Quantum Physics
Stochastic, Granular, Five-Dimensional Space-Time: a Root Model for Both Relativity and Quantum Mechanics, and a New Interpretation of Time
Authors: Carlton Frederick
Abstract: A stochastic model is presented for the Planck-scale nature of space-time. From it, many features of quantum mechanics and relativity are derived. As mathematical points have no extent, the stochastic manifold cannot be tessellated with points (if the points are independently mobile), and so a granular model is required. For Lorentz invariance, the grains cannot have constant dimensions but instead constant volumes. We treat both space and time stochastically and thus require a new interpretation of time to prevent an object from being in multiple places at the same time. As the grains do have a definite volume, a mechanism is required to create and annihilate grains (without leaving gaps in space-time) as the universe, or parts thereof, expands or contracts. A 'rolled-up' fifth dimension provides the mechanism. As this is a 'root' model, it attempts to explicate phenomena usually taken for granted, such as gravity and the nature of time. From geometric considerations alone, both the General Relativity field equations (the master equations of relativity) and the Schrödinger equation (the master equation of quantum mechanics) are produced.
Comments: 30 Pages.
Submission history
[v1] 2019-05-13 17:12:38
[v2] 2019-05-26 10:03:22
[v3] 2019-06-25 14:00:34
Tuesday, December 18, 2012
Politics where it doesn't belong
I was reading and studying one of the most exciting things I have discovered in my years of research into energy and alternative power systems when I encountered something that just didn't fit.
One of the sources that I had come to trust to impartially deliver new discoveries had stated an opinion regarding a political agenda of an inventor he was reporting on.
This is so much the norm in mainstream media that it doesn't alarm most people, but it was not to be expected from this source, so I had to disagree and make a statement, which I will share here in case it doesn't get approved on the site where I wrote it:
My primary argument is that this political bias has no business in the forum in which it was placed.
Regardless of its accuracy or validity (which, honestly, I completely disagree with), it just should be left out.
The most alarming thing is that for some reason the reporter of technology takes it upon himself to voice a political argument instead of reporting.
Mr. Keshe has something that will benefit every man, woman, and child on the planet if realized.
From exhaustive research, watching every video available from Mr. Keshe, and reading the contents of the forum, most notably all of Mr. Keshe's posts, I have heard him repeatedly state that he is not affiliated with or endorsing the government or governmental system of any country.
What he does state is that he is working to release the new technology to all the world and is cooperating with every government.
He is moving forward against considerable counter-intention and having to find ways to get this work into the hands of the masses everywhere at the same time.
His interest is in space travel, not politics. All I have experienced from his interviews is a posture of benevolence.
That being said, the changes in the economy would be irrelevant once the banking cartel is arrested.
This world is run by bankers who have ingrained in everyone the want for and valuation of the apparently scarce commodity therein.
This is changing in the future as the majority gets educated about the criminal control of energy education meant to keep the masses ignorant of the real options available.
Our primary objective should be the unbiased release of solutions to everyone of every political standing.
My understanding is that Mr. Keshe is not interested in proposing communist or socialist agendas.
But when energy, travel, and food are free, it will free us all from the slavery we are in and allow for a real evolution of mankind, to reach higher and create more of our lives than the pursuit of money.
Confusion is usually derived from a lack of understanding on some finite point.
American politics will not survive in its current form.
The tyranny that every other country knows America to exemplify has no place in the future, and as such it should be understood as a failed, corrupt system, to be observed as an example of what happens to a beautiful ideal when criminally insane individuals gain too much power over the exchange systems of a country.
Our Constitution and the Bill of Rights are far from the standard practice today.
If we went back to it things would start to immediately improve.
The FED is an illegal private criminal organization that has robbed the entire world blind, and the group in control of it is responsible for the deaths of millions worldwide since its inception under President Wilson.
It is these same few uninformed political agendas that have misguided the people and obfuscated the reality of technology that could have resolved all the issues of war and famine worldwide since the time of Nikola Tesla. J.P. Morgan betrayed every person on the planet for his own profit, not only when he retracted funding from Tesla at a critical moment, when Wardenclyffe was nearly finished, but also at the helm of the piracy that is now the Federal Reserve Bank.
We have a social responsibility to remain impartial when steering the attention of large numbers.
It remains true that the only way our race will survive is to learn and teach others primarily to think for themselves.
Secondarily, to impart the understanding that one must seek out facts rather than opinions and know the difference.
Our opinions are derived from our understanding and from past experiences that we weigh data against to predict outcomes.
My opinion about this particular article is that these are a few points taken out of context and expanded on by rhetoric, which I will have to flatly say has no place in this type of forum.
I appreciate all the diligent reporting up to this point that you have done Sterling.
It is without a doubt a valuable service that you have provided mankind.
I believe a great many love you for it.
But I think you would do well in the future to keep these types of rants to a separate forum, or on a blog that is posed to ask questions in an open-minded way, or to let others rant and merely maintain an open forum for opinions.
Monday, December 10, 2012
Watts in your wallet.
So I've been very busy the last couple of weeks.
We, meaning me and the very cool people who have supported my project "Around the World on 80 Watts", have been doing a lot of research.
Now, for many that means digging through what other people say on the web, but as an advocate of the "Think for Yourself" campaign, I personally go about putting theory to the test and duplicating experiments done by others to prove things can be done. For those of you who don't know, that is called the scientific method.
So I'll get off my soapbox and report.
The campaign video says, "We found a 130 research vessel in Alaska and we are going to sail it around the world on 80 watts."
I still intend to do that but we may have to get another boat because I don't think we are going to have the funding in time to save that particular boat.
I have found several boats that are equally good for this venture though more expensive and will give updates for that later.
So, on to the experiments I have been doing.
I found an even better candidate for the charger system, based on John Bedini's work, called a joule thief.
This little charger is basically silent and charges anything from a AAA battery to a 12 volt car battery.
I have been running many experiments with it with some very promising results.
I have also been doing experiments with electrolysis in water and generating hydrogen.
The reason for that is that nearly any naturally aspirated engine can be run on hydrogen without major modifications.
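For a sense of scale, here is a rough back-of-the-envelope sketch using Faraday's law of electrolysis; the current and run time are assumed illustration values, not numbers from my experiments:

```python
# Rough Faraday's-law estimate of hydrogen output from electrolysis.
# The current and run time below are assumed illustration values.
F = 96485.0                          # Faraday constant, C/mol
current_a = 10.0                     # assumed electrolysis current, A
hours = 1.0                          # assumed run time, h
charge_c = current_a * hours * 3600  # total charge passed, C
mol_h2 = charge_c / (2 * F)          # 2 electrons per H2 molecule
liters_h2 = mol_h2 * 22.4            # ideal-gas volume at STP, L
print(f"{mol_h2:.2f} mol H2, about {liters_h2:.1f} L at STP")
# -> 0.19 mol H2, about 4.2 L at STP
```

At that rate it takes a lot of amp-hours to make a useful amount of fuel, which is why the efficiency of the charging and electrolysis stages matters so much.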
That being said I have video to demonstrate.
To further this project I am running a fundraising campaign, and I fully intend to endorse the sources of the technologies that I discover, as well as, when possible, do interviews with the individuals responsible for their inception.
I think a high quality video production on these alternative energy devices is long overdue and would like to be the one to do it.
So here evolves the purpose of the project and what I think is really going to be the Mission Statement for the production.
"To reveal and demonstrate working alternative technologies and support the innovators that create them."
This really is a huge undertaking and I appreciate all the support I have gotten so far.
For instance the people who have directly funded the campaign on Indiegogo and to the individual who donated the jetboat we have now for sale to further research and production.
We still need a lot more help and you can do just that by sharing the campaign link or contributing, telling your friends, commenting there on the campaign or the Youtube channel or of course here on the blog.
There is so much more to come and I am more excited about this project than anything I have ever been involved in.
What does this mean to you?
I will show everyone what works and how you can integrate it into your daily lives, as well as provide links to products from the inventors and the companies that support them.
You will save a great deal of money by employing these technologies, and we get to make discoveries that could change the way we look at energy and transportation and lower the carbon footprint of the entire country.
These are the Watts in your wallet and our mission statement to you and all that help and tune in to the show.
Tuesday, November 13, 2012
Around the World on 80 Watts
I've been doing a lot of research over the years and recently made a decision to do something with my findings.
There are a lot of people out there that are creating and innovating, but the world at large is still under the control of the mass deception that keeps us all in bondage to the way we are told things are.
So it might be extreme but no one ever made any significant change by being conservative.
This is a preliminary campaign to get the interest of more innovators and investors that can really help this project get somewhere.
What if the reality of universal energy was fully understood by everyone?
What if we no longer had to pay for the energy and propulsion we use on a daily basis, and we could cooperate to change things for the better worldwide?
Profit to the few is eating all our resources, destroying our planet, and starving people all over the world, and we are all too busy to do anything about it, just trying to survive and pay our bills.
Take a look at the campaign and climb aboard the ship to the future, for the survival of not just a rich few, but all of us. A social evolution for the planet, your home town, and your neighborhood.
The goal is to create a place for this testing and innovation to take place in a public forum, using the tools of modern life and social media to show the truth as it really is, without the agenda of profiteers.
What if we really had the solution?
Can you put aside doubt for just a minute and entertain the possibilities?
Monday, October 15, 2012
Removing barriers between us
The world revolves around contacts, connections and information.
Some of the biggest problems we have are simple misunderstandings.
Oftentimes these are simply a matter of a language barrier.
As a reference, look to the challenges of our political control.
Do you really believe that the fundamentals of any religion would support the slaughter of non-believers or justify harming another over beliefs?
The answer is no, not one.
The agenda of some few to keep the rest of us separated,
only serves to help us remain slaves
to current systems and the agenda
of those who have been taking the very life from our society.
Anything that can be used to convince us that there are racial superiorities or inferiorities is a lie,
a trap and only serves to keep you and me as slaves.
Some very interesting facts about the origin of our modern religious practices can be found here. http://structural-communication.com/Articles/paterfamilias-stclair.html
The very origin of the word God is hidden and unknown.
The dictionary derivation states (from the German Gud),
which is false, as the word predates the Germanic term and is not accurate.
The word is an acronym.
The Romans sent sentries to the east to find their concept of the divine.
They still had terms like the Father and Yahweh,
which is also inaccurate as a name, as it translates roughly to "He, who proves himself to be."
With the third-party reference, it does not reveal a named consciousness or deity but an idea, an observation.
More accurately, the word would start with an Aleph instead of a Yud, making it
I, who prove myself to be.
Still not a name but a reference to an awareness stating for itself that indeed, I AM.
The determination for the word God comes from that same line of derivation.
Sanskrit to Latin
Agni - Genera
Varuna- Opera
Mithra- Demoli
This is also the symbolism behind the Om.
It is said that the sentries returned with the Om.
There is no real separation.
Only what we allow to divide us.
These statements are not intended to be taken as irrefutable facts but merely to be considered as a reference to how connected we all could be, at the very core of what divides us as a planet, a species, a people.
This is after all our home, Earth.
We are all, family.
Wednesday, September 19, 2012
Share your Passion
I often tell those around me to find their passion,
follow their true goals, and pursue things that others may tell them are crazy.
How can you know what your limitations are
if you never challenge yourself
or if you always follow the rules?
I have been diligently working on a very big project for some time.
The project has opened my eyes in new ways and opened doors I thought shut for years.
It is evident that I am taking my own advice from above.
I am meeting some very interesting caring people through this work and
finally am in a place I feel I have been working toward for a very long time.
I want to share some little pieces of this project with you along the way.
So here are a few clips that will give you an idea of just how cool this project really is.
This is a Logo Reveal for the project. Not entirely finished but as I said this is a work in progress and I am sharing... Here are a few small segments from the interviews in the project.
I hope you enjoy these....There is a lot more to come.
Yours is the future that you create.
Thursday, September 6, 2012
Censorship by Algorithm, What are Google and Facebook doing to your world view?
Here is a great talk on TED that I thought so important
and so impactful that I feel everyone should see it.
You decide for yourself what is important
or would you rather have the internet providers do it automatically for you?
I think that is robbing us of who we may want to be.
Sure, it is probably mathematically accurate to the 100th decimal, but...
How is a computer program or internet algorithm going to decide for me
when I am ready for that step of personal evolution?
Automaticity is often not the most beneficial way,
in fact I think it can be harmful to us if left unchecked.
Think of this.
How would you feel if there was a proposal
to have all the clicks you made on the internet
and all the posts you made to Facebook,
even the rants on your blog,
plugged in to an algorithm
that made your voting decisions for you?
It could be done, right now, with technology we have.
But wouldn't this make your opinions redundant?
Do you really want those whimsical momentary impulse decisions
and cute cat videos to be running your life?
We all want our opinions to be valuable
and I think this young man in this video makes some very good points.
Thanks again to the speaker and to TED for making this content available.
My Point is, Think for Yourself. Because now there are computers doing it for you.
Thursday, August 30, 2012
It better be good.
In this age of the internet,
if you want to get any attention
and build an audience, then you have to put in
whatever is needed for the project to be good.
I really mean GOOD.
People will judge your effort in the first few seconds and
you have millions of other distractions and competitors
fighting for the same time with your audience.
One Minute.
Saturday, August 25, 2012
What success is.
I was talking with a producer earlier today and wanted to share something with her that I had made and this video came up.
It is an old project promo that I made which was a huge educational experience for me.
Though I am not with the project anymore they are still doing what they do and a lot of work goes into it.
Success is not what one gets from it,
but who one becomes through it....
I would not be the editor I am today if it hadn't been for this project.
And here is what I made. With the help of a lot of very good people.
If you have any questions about how to do anything here or other questions about production or editing let me know. I would be glad to share here and your question might be selected for my next tutorial.
Thursday, August 23, 2012
The evolution of a project, Being open to greater possibilities
You may have seen the post I made about "the Colter files" with the video I posted here.
I have to admit it was still a little rough.
The project now has evolved into something even more exciting.
We already have 18 interviews shot and many more people are getting involved.
What we realized was that this subject is bigger than we thought or could see at first.
We could have stuck with what we were doing and kept the project small.
That is the danger with a topic of this sensitivity.
We decided not to get in the way of it and now the project has grown beyond us.
Project "wEVOLve"
is born and we are more excited than ever.
This weekend I am recruiting more editors to help and will be making new graphics
and a teaser for the project.
We will be launching a crowdfunding campaign to increase awareness
and get some resources for post-production and production
of a storytelling short film to be included in the documentary.
Think of a parable but in video.
I believe that art is only as good as it communicates.
That is what drives me to do what I do and I love it.
So stay tuned and I will keep you updated here with the exciting story as it evolves.
Inciting personal revolution.
Sunday, August 19, 2012
Tips for cameras on the cheap
Here is a little training for you to get started with cameras and choices.
Frailty and the need for confront
The world is full of daily problems for each of us to deal with.
The greatest of which is that we often can't see how we ourselves can really make positive change.
The limitations we place on ourselves are really the only thing in life that truly limits us.
You yourself right where you are, have the power to change any circumstance.
What will stop you, is literally you.
Now, I know you have probably seen the hype out there and the "motivational" rah-rah.
But I want to give you some tools here that you can actually use to a noticeable effect.
When you get overwhelmed there is a great way to get out of that fog of feeling hopeless. I would start with two things, a simple exercise really.
First: Take a walk. Walk around the block where you are and look around.
When I say look around, I mean really look: look at the trees and bushes, the rocks and cracks in the sidewalk.
Keep walking and looking until you actually feel better, and you will.
The second thing is:
Put order into the things you have to do, by concentrating on just one thing at a time...
Write down the things you have to do, select the one that you can actually do right now all the way to completion and... Do it.
Once that first item is done.
You then find the very next thing you can do right now, and do that. Remember, The thing to focus on here is that you get something done.
Done means you can sign it off and put no attention on it.
If there are things you cannot do right now don't put them on the top of the list.
The next thing we need to understand is how to program something that will take several steps.
Let's take a look at how to do that.
First write down what the finished project or product will look like.
Then take a real look at what it will actually take to get it DONE.
Step by logical step, working from what needs to happen to get started and then what needs to happen right after that.
All the way to the complete end product.
(Which means something valuable that can be exchanged)
Anything can be accomplished with an understanding of what the steps are.
The thing that I find useful to remember, is that as long as I myself decide to be responsible for whatever I need to do I can get it done.
The only limitation on you is the willingness to confront.
Confront is looking at the thing as it is, where it is, the way it is.
This is often harder than it may seem just looking at these words but you can do anything you actually apply yourself to if you just know "What it is."
I hope that helps.
If you are interested in more of this leave a comment and stay tuned here.
Camera basics in short, how not to frame a shot
and this is the result.
Here are the basics on the rule of thirds.
Can you tell I am grumpy?
By the way, Magic Bullet Denoiser made this key possible.
I didn't have it lit right at all.
Lesson for that coming soon.
Now, go shoot better videos.
My first video for the project "The Colter Files"
Hey all, I have been working on this project for some time now and finally have the first finished promo to show all of you. I will be doing a series of tutorials on how I used low end equipment to create and edit this video and others.
More to come from "The Colter Files".
The value of multiple cameras
Hey all, here is a quick video about the difference multiple cameras can make on a low-budget shoot, especially in an interview format.
How to create a post on Wordpress.
Here is a video for all those who would like to start using Wordpress but aren't sure how.
Very basic step by step.
I will do some more video tutorials like this to get a bit more in depth to making this an easy system to use. Believe me, there is a whole lot more.......
Saturday, August 18, 2012
Lights for filming on the cheap.
I had some cheap lights fail on me recently, but rather than throw them away, I did something useful and salvaged something worthwhile, and here is how you can do the same.
Friday, July 13, 2012
Making videos for five
The site fiverr.com has a wide range of services that people are willing to do for five.
Here is the link.
See you there!
Thursday, July 5, 2012
Lost in a fog, A man with the wrong name
Fugue State the movie is finally out on Amazon!! http://www.amazon.com/Fugue-State/dp/B008DKDZ60/ref=tmm_aiv_title_0?ie=UTF8&qid=1341514235&sr=8-2 The title is from a psychological term for "one who thinks they are someone else, usually accompanied with relocation." This post-apocalyptic thriller was the first movie I worked on…
via WordPress http://www.empowernetwork.com/streamingindie/blog/man-with-the-wrong-name/
Tuesday, July 3, 2012
What you think you know is keeping you where you are.
I’ve talked to many people about many things.
The ones that think they already know never learn.
I wish for you the same thing.
Friday, June 29, 2012
Message from Dr. Steven Greer
I've been following this project closely for quite some time, and there are many eye-opening revelations to be made from checking into the Disclosure Project. This is just the latest update from Dr. Greer's mailing list. I'm excited to share it with you here. ~From Dr. Greer~ Please post and circulate widely. Thank you for all your generous…
via WordPress http://www.empowernetwork.com/streamingindie/blog/message-from-dr-steven-greer/
Thursday, June 21, 2012
Zero Point Practical
Check out my other blog designed for help and options for the industrious free thinker.
The major corporations sucking the life out of all of us in the form of gas and energy bills
do not want you to know
there are better ways to fulfill your own needs in these areas; they are driving costs out of reach
and making it harder for us all to survive.
Let's do something about it instead of just succumbing to the system.
Really helping each other where it counts.
I have discovered technologies that are simple, practical and very effective.
I will be sharing those here, promoting them on this and many other pages on the web.
There are many technologies that can make our lives easier and more affordable.
My goal is to make a place to share those
without some hidden agenda:
an open discussion format and a question-and-answer area that makes it viable for anyone to implement these new technologies on a gradient anyone can understand.
This is a very technical, revelatory lecture that says a great deal about where I will be taking this discussion. If you aren't familiar with science and advanced terms in scientific theory, you might want to wait until I edit some of these videos down into pertinent, digestible pieces for the layman. Suffice it to say that my goal here is not to be overly academic but to give solid references backing any theories and working prototypes I show. This is the foundation of my argument. Enjoy.
A brief interview with John Bedini. I will be sharing more videos of him and his work here. Personally, I feel we all owe him respect for his diligence and patient work in the face of opposition and covert attack. This man is well established as the pioneer of experimental "open circuit" Radiant Energy, a concept first mentioned by Nikola Tesla in the early 1900s with his radiant energy antenna patent.
This is one of my personal favorites.
With the basic understanding of how these devices work
One can get plans and build a unit for themselves or support the man who developed the technology by purchasing this kit.
I am not an affiliate for this product.
Nor do I get paid to market for them.
I am just sharing this here because I know it works
and what you can do with these products is up to your own imagination.
The source and place to get more information on these products is here at
Renaissance Charger systems
I have used John Bedini's patent information in my own experiments
and everything I tried worked just like he said.
This is the only source I know of that is approved by him and will support his efforts.
Check it out.
Feel free to make comments here as well.
I would love to open a discussion.
Wednesday, June 20, 2012
Knowing who you can trust
I made some graphics and video with a group some time ago.
I'm not working with them any more, although they never really asked me to do any more video or editing for them, or really gave credit to all the people who worked so hard at the beginning of the production.
What is rather sad is that a friend put a substantial amount of his own money into this, and try as I might to help, I was basically undermined by someone who only wanted some weird power struggle.
She eventually destroyed the entire group and I finally left as well.
I warned these guys that she needed to be gotten rid of.
They didn't listen, and everyone quit.
The lesson in this is you need to know who you can trust.
If someone that is not producing anything valuable is demanding attention and vying for control,
eject them from the group. Make it quick and decisive.
Don't let anyone think it was an accident.
There are people that are damaging to a group.
Well, anyway, lesson learned. I'm just sorry for those who lost their investment.
They are still trying to make a go of it though and aren't too hard to find if you want to.
Here's the video..
You are of course welcome to look at the before and after of this show and let me know if you can tell the difference.
Heroes, Power and Obscurity by Suppression
There are a few people in a century who think so far outside the box that, if positioned right, they can change the operating basis of the entire world. The significance of such discoveries can be as great as Nikola Tesla's alternating current (which we all use today).
He had many other developments that we know very little about due to the control of markets and of information being stolen, suppressed or just hidden from view.
The general public has no idea how most of his work after the opening of the power plant at Niagara Falls could change the way we deliver electricity and the very source of power that is freely available.
As soon as his financial backer realized that Tesla had a means to deliver power without wires to any house anywhere, J.P. Morgan stopped paying the workers at the radio site and ruined Tesla's plan.
You see, Morgan's primary source of income was copper wire. Tesla's plan would cost him billions, or so he thought, and so we still pay for electricity at our homes and the Morgan family is still one of the richest in the world, while Tesla died in poverty.
There are many pioneers that run into this problem of an economic agenda, this is only one example.
The status quo makes tremendous profits for those that are already on top and change is perceived as a threat.
There are a couple of pioneers who have been breaking down the confusion around electricity and making things available for us to make new discoveries.
I wanted to take this opportunity to share something that has been on the web for a long time and is still in obscurity for lack of faith or understanding.
This is me paying respect to two of my personal Heroes:
Dr. Tom Bearden and John Bedini
This information comes directly from John's site, linked here.
There is a great deal of explanation here; if one takes the time to understand it,
you will realize that energy is easily available and in unlimited supply.
This is the first installment of my support for this work.
Proof of concept and theory for the pioneers themselves.
See this information for yourself; these men deserve more attention and respect from the world that will benefit from their perseverance.
One key thing I would like to state here is that:
If you are engaged in experimentation and application of designs created by someone, then you should be ethical and give credit and a portion of any profits to the originator of the designs.
I personally will never make a dollar from these technologies without giving a percentage to Mr. Bedini.
I will be sharing more links where you can get working models of "experimental educational devices"
From an approved Bedini outlet.
This material is here to show theory and practical development.
Welcome to Bedini Technology
We at Bedini Technology, Inc. have developed energy systems for many years, since the early 1970s. We have openly shared many of these discoveries on the pages of this website since the beginning of the internet. Due to recent events, it is becoming increasingly clear that a growing number of people are using ideas from this website and infringing on my patents without even the courtesy of giving me credit.
We will explain the BTI negative resistor process for taking extra energy from the vacuum. For simplicity, the process will be produced in a common lead acid storage battery. First we will explain the necessary background to understand this process.
An open thermodynamic system such as a windmill receives energy from its active environment. Such a system can change its own potential energy as more wind energy is received. It can also power itself and a load such as a pump to provide water.
The open system can re-order itself. It can self-oscillate or self-rotate. It can output more energy than the operator inputs, because the environment furnishes extra energy. Like the windmill, it can power itself and its load simultaneously. It exhibits what is called “negentropy”. That is, it increases its energy as it receives more energy from its environment. For example, the windmill increases its energy as the wind blows more strongly.
To relate to electrical systems, we can regard the windmill as a "negative resistor", since it accepts unusable wind energy from the environment and transforms it to shaft horsepower to power the load (the pump). In other words, a negative resistor receives energy from the environment in a form not usable by the working load. It transduces the energy into usable form by re-ordering it, and then furnishes the usable energy to the load to power it and do work for us.
For over 100 years, conventional electrical systems have been designed as equilibrium systems. They are symmetrical with their active vacuum environment. They give right back to the vacuum any energy they receive from it. With those systems we have to put in all the energy we get out and use. We must also input some additional energy to cover losses in the system. The ratio of output to input is less than one. We say that these systems have a coefficient of performance or COP less than one. We also refer to them as “underunity” systems.
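To make the bookkeeping in the paragraph above concrete, here is a minimal sketch of the COP ratio as the text defines it; the numbers are purely illustrative, not measurements:

```python
# COP = useful energy delivered to the load / energy the operator supplies.
# The joule values below are illustrative only.
def cop(useful_out_j: float, operator_in_j: float) -> float:
    return useful_out_j / operator_in_j

print(cop(90.0, 100.0))   # conventional "underunity" system: COP = 0.9
print(cop(300.0, 100.0))  # a claimed open system fed by its environment: COP = 3.0
```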
Nearly 50 years ago, particle physicists discovered that the symmetry of an electrical system with the active vacuum can be broken. So a sort of "windmill" electrical system, in a vacuum energy wind, is permitted. Such a system would be powered by vacuum energy. Lee and Yang received the Nobel Prize for this and related work, based on experiments by Wu and her colleagues. Prigogine later received a Nobel Prize for his contributions to such systems. However, electrical engineers still design power systems with a 136-year-old EM theory which has not been changed. The theory does not include extracting and using electrical energy from the active vacuum. Our engineers continue to design power systems the old way.
Any dipole is a broken symmetry in the vacuum energy flux. So the common dipole – simply separating positive and negative charges – provides a negative resistor. The potential (i.e., voltage) between the two ends is a novel energy flow circulation, as shown by Whittaker in 1903. Energy from the vacuum – in the complex plane, or what the engineer calls "reactive power" – is continually absorbed by the charges on the ends of the dipole. The charges transduce the absorbed reactive power into real electrical power, which then pours out from the dipole in all directions. This gushing energy from the vacuum will continue while the dipole lasts. We only have to "pay" once, for initially making the dipole. For example, dipoles in ordinary matter have been pouring out energy extracted from the vacuum for some 15 billion years.
Batteries and generators do not power their attached circuits! They expend their available internal energy (shaft energy input to the generator, and chemical energy in the battery) to force their own internal charges apart, making a source dipole. That is ALL that batteries and generators do. They do not place a single watt of power on the external circuit, nor do they power any load. Instead, from Whittaker’s work in 1903, the dipole receives vacuum energy (reactive power), transduces it into real power, and continuously pours out that energy along the circuit, filling all space. The circuit intercepts a tiny bit of that energy flow, and powers the load. Every electrical load and circuit is powered by electrical energy extracted from the vacuum. All electrical loads are powered by vacuum energy today.
All the hydrocarbons ever burned, all the fuel rods ever used, all the dams ever built to turn generator shafts, etc. have not added a single watt to the power line. All that enormous effort has done nothing but make power system dipoles. Sadly, our engineers have always made systems so they kill the dipole faster than they can power their loads. So with these archaic systems we have to continue to burn fuel, build nuclear power plants, etc. just to remake the dipoles our systems continually destroy. Simply put, that is not the way to run the railroad.
The Bedini process repeatedly produces a negative resistor inside a battery or other energy storage device for free, or nearly so. Once the negative resistor is momentarily established, a blast of energy leaps from the vacuum onto the charges in the battery and onto the charges in the circuit, which are flash charged with excess energy. The battery is recharged and the load is powered simultaneously.
A typical system approach is to power the system from one battery, while a second battery or group of them is on “charge” from the negative resistor process. Then the powering battery is switched and the load powered from another one, so that the original battery can be charged very rapidly.
Iteration keeps all batteries charged while continuing to fully power the load. A typical DC output may be converted into standard AC in an ordinary DC-to-AC converter, e.g. to power one’s home. The Bedini process will give birth to very different, decentralized electrical power systems taking their electrical energy directly from the local active vacuum.
We illustrate the enormous amount of energy that any dipole actually converts from the vacuum and outputs. Here is one of the conductors (wires) attached to one terminal of a generator or battery. A large wave flow surrounds the wire, out to an infinite radial distance. This shows the enormous energy flow that is pouring out of the terminals. This is real EM power. As can be seen, most of it misses the circuit entirely and is just wasted. In the wire, we see the free electrons bouncing around, coming to the surface, and intercepting a tiny bit of the passing energy flow – much like placing your hand out of the window of a moving car and diverting some of the passing air flow into the interior. In this wire, only that tiny, tiny bit of energy flow deflected into the wire is used to power the electrons, produce current, and power the circuit. As you can see, every circuit has always been powered by the little bit it is able to catch from an enormous passing energy flow. The entire large energy flow is extracted from the vacuum by the source dipole and poured out of the terminals.
In this animation we show how the energy is received by the dipole from the vacuum as reactive power. The charges then transform their absorbed energy into real usable power and pour it out profusely. An enormous flow of real EM energy results. We must now have a circuit which intercepts and collects some of that huge, gushing energy flow, and dissipates the collected energy in loads. As can be seen, if we make the dipole stronger, we increase the energy flow. If we diminish and destroy the dipole, we diminish and then destroy the gushing EM energy from the vacuum. So then we must pay to restore the dipole.
This animation shows how the Bedini process in a battery forms a negative resistor, which extracts and furnishes vacuum energy. The electron current can only move from the outside of the plates out into and through the external circuit. Between the plates, a very heavy lead ion current sluggishly moves. A pulse of electrons immediately piles up on the edge of the plates, trying to push the lead ions in charging mode.
The ions move very slowly, so that electrons continue to pile up. The density of the electron pileup produces a sudden large potential – a dipolarity. As we showed, this dipolarity produces a sudden blast of much-increased EM energy flow across the ions, adding much greater energy to them. At the same time, the blast of EM energy also travels out into the external circuit, driving the electrons to power the load. In short, momentarily this 12-volt circuit has been freely converted to a 100-volt circuit. Its available power has been increased by a factor of 8 or more.
As the pulse of electron pile-up potential is cut off, the well-known Lenz law reaction is evoked. This momentarily squeezes the electron pileup even more, suddenly raising the voltage to 400 volts. This further increases the available power by an additional factor of 4 or more. So the circuit now has some 32 or more times as much power as it initially had from the battery alone. The collection of the excess energy from the “charging” of the overpotential occurs on the ions charging the battery, and also on the external circuit electrons powering the load. The system has been blasted open and is receiving a great surge of energy from the vacuum. It receives this excess energy from the dipole acting as a true negative resistor. As an analogy, we have converted the system into a sort of “windmill” and triggered the vacuum into providing a very powerful set of wind-blasts to power the windmill.
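As an aside, the arithmetic the passage uses can be checked directly. The sketch below simply reproduces the stated voltage ratios; note that reading them as power gains assumes a fixed current, since P = VI:

```python
# Reproducing the voltage-ratio arithmetic stated in the text above.
v_battery, v_pileup, v_lenz = 12.0, 100.0, 400.0
first = v_pileup / v_battery   # ~8.3, the "factor of 8 or more"
second = v_lenz / v_pileup     # 4.0, the "additional factor of 4"
print(first, second, first * second)  # ~8.3 * 4.0 ~= 33, the "32 or more times"
```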
This animation shows the operation of a typical Bedini power system driving a rotary motor (center) and charging a bank of batteries (top) from a battery (left side). The negative resistor process (shown by the bubbles) in the battery at the left is continually triggered. The "energy" is used to further trigger the negative resistor process in each battery in the bank at the top. A DC-to-AC inverter is connected to the battery bank, so that standard AC power is output to the normal electrical wiring system of the house, office, etc. As can be seen, the battery and charging systems are used to extract excess energy from the vacuum, convert it to usable DC form, and collect it. Then the converter changes it to proper AC form to power the house AC, while simultaneously the motor is being powered. In addition, the precise timing and switching for the charging of the system with vacuum energy is mechanically built into the motor system.
This animation shows how the motor/timer/switcher can be arranged in banks to dramatically increase the shaft horsepower. At the same time, additional banks of batteries or other accumulators can be continually charged, so that an entire neighborhood or a large office building can be powered by the system's larger AC converter (not shown). The output can power any shaft horsepower load required. In the future, an adaptation of this approach could power transport vehicles such as automobiles, trucks, trains, boats, etc.
This animation shows a typical home with an installed Bedini power system. Here the batteries are utilized as negative resistors and accumulators. A standard DC-to-AC converter is also powered, so that standard AC power is furnished to the main power panel of the home. All the usual home appliances and loads are powered in normal fashion. This home is immune to power outages from storms, blown transformers, substation failures, brownouts, or blackouts. Everything is powered by electrical energy obtained directly from the active vacuum.
In this segment we show an actual lab test model that demonstrates the principles of the Bedini process. The main battery is here (point) and you can see the motor here. The motor is doing work by operating a fan blade and pumping air. Accumulators are located here (point), in which energy from the proprietary Bedini transformer (point) is being cumulatively collected eight times for each revolution. Once per revolution, precise switching (point) discharges the accumulator transformer into the secondary battery (point) to charge it. In this arrangement, we show proof of principle by continuously doing work (pumping air) while continuously keeping the secondary battery charged. Periodically the batteries are switched and the former primary battery is charged. The excess energy comes directly from the active vacuum, through the negative resistor in the battery created by the Bedini process. In addition, we are demonstrating additional energy being obtained from excess collection in the transformer (point) eight times per rotation, and fed into the battery once per revolution to recharge the secondary battery. Another principle shown by this system is the superpolarity of the magnetic motor (point). The magnets all have north poles pointing outward. The compression and repulsion in the middle of any two poles creates a north pole whose field strength is several times larger than the field strength from each magnet. Thus we have formed eight "phantom poles" to dramatically increase the field energy density in the magnetic field, where the special transformer (point) collects additional energy from the superpole flux cutting one of the coils, eight times per revolution of the rotor. The energy is collected in an accumulator transformer (point), and once per revolution it charges the secondary battery. The system demonstrates that the vacuum energy can be collected in several places and in different ways, collected in a proprietary accumulator transformer, and then used to very powerfully form a sudden negative resistor in the battery (point).
This charges the battery with additional energy from the vacuum as previously explained.
The electrical energy needs of the world are increasing exponentially. At the same time, the world's oil supplies are peaking and will be gradually decreasing, while becoming ever more expensive to obtain. The easily foreseeable result is first a world energy crisis, now looming, followed by a world economic crisis as the prices of transportation, goods, etc. increase. The Bedini negative resistor process can resolve this crisis that is coming upon us. With Bedini systems and technology, the increasing need for oil can be blunted and controlled, so that the economy levels off while at the same time additional electrical power is provided as needed.
The BTI processes and systems pose no threat to the environment. By blunting and leveling hydrocarbon combustion to produce the increasing electrical power needed, these BTI systems will dramatically reduce the environmental pollution and damage that would otherwise occur. The processes produce clean electrical power, and do not require rivers, special conditions for windmills and solar cells, hydrocarbon combustion, or nuclear fuel rod consumption. The BTI systems can be placed anywhere on earth, beneath the earth, in space, or under the ocean's surface. They will provide clean, cheap electrical energy anywhere, anytime, everywhere, and every time, with no detrimental impact to the environment. In addition, their natural decentralization eliminates failure of entire power grids or large sections of them, whether the cause is natural or manmade.
BTI is currently working on additional designs that will produce more power on demand, quite flexibly. These systems are adaptable to almost any electrical power system application, from pumping water to powering high-speed turbines. The potential for replacing almost every inefficient electrical motor with regenerative systems is obvious. Most industrial and consumer applications can be met by Bedini systems more economically, cleaner, cheaper, and far more efficiently. Compared to other systems, a BTI power system will always use less and produce more in the same application, and do it cleanly and without pollution.
The company has been granted patent protection and the Bedini processes are patented. Worldwide protection is in process and will be diligently maintained during the patent process. BTI will also be filing many additional patents as the technology further develops, to extend and complement the two processes.
You have witnessed what we at BTI believe to be the dawn of a revolutionary new age of efficient and clean electric power. Producing energy at a fraction of its present cost, dependably and reliably, and doing it easily and anywhere, will revolutionize the present systems with their wastes and pollution. The BTI power systems will provide a never-ending source for electrical power and energy so desperately needed by all the peoples and nations of the earth. Providing and maintaining a secure, safe, clean future of plentiful electrical power is our goal and hopefully yours as well.
Keep The Lights On
We at Bedini Technology, Inc. wish to thank you for viewing our scrolling presentation. Please view our main page for further information.
The Tom Bearden Free Energy Collector Principle
In the paper "The Final Secret of Free Energy", written on February 9, 1993, Tom Bearden described the principle of a device which seems able to tap free energy from the energy flow (the Poynting S-Flow) in the vacuum during the short transient phase (the relaxation time in a conductor) when a source is connected to a resistive load. In this paper, I am trying to clarify a bit the basic concept of this principle.
Tom Bearden claims that when a source (a dipole) is connected to a resistive load, the most important part of the principle is the information transferred to the load at the speed of light by the S-Flow. The S-Flow is pure EM energy which flows through space, outside the conductor. This energy is free, and only this part must be used as a "free lunch". Just after this very short time after the switch is closed (the transient phase), the current begins to flow in the circuit. This transient phase is named the relaxation time. In copper, the relaxation time is incredibly short, about 1.5 × 10⁻¹⁹ s. When the current flows (the permanent phase), the circuit consumes power from the source and dissipates energy through the Joule effect; this phase must not be used in our case.
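For what it is worth, the copper figure quoted above is the size of the standard charge-relaxation time tau = eps0/sigma; this is a sketch using handbook values, not something taken from Bearden's paper:

```python
# Charge-relaxation time of copper, tau = eps0 / sigma (handbook values).
eps0 = 8.854e-12     # vacuum permittivity, F/m
sigma_cu = 5.96e7    # electrical conductivity of copper, S/m
tau = eps0 / sigma_cu
print(f"tau = {tau:.2e} s")   # ~1.5e-19 s, matching the figure quoted above
```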
So, according to Tom Bearden, to tap free energy, the purpose is to charge a "Collector" during its relaxation time and then to switch this Collector to a common resistive load, just before the electrons begin to flow in the circuit.
<< We took some trapped EM energy density (a chunk of potential gradient, a "voltage" before current flows) from the source, by switching that potential gradient (energy density, which is joules per coulomb) onto a collector (containing a certain number of coulombs of trapped charges) where the potential gradient activates/potentializes/couples-to these temporarily non translating electrons. So the finite collector collected a finite amount of excess energy [joules/coulomb x collecting (trapped) coulombs] on its now-excited (activated) free electrons. Then, before any current has yet flowed from the source, we switched that potentialized collector (with its temporarily restrained but potentialized electrons; with their finite amount of excess trapped EM energy) away from the source and directly across the load. Shortly thereafter, the relaxation time in the collector expires. The potentialized electrons in the collector are freed to move in the external load circuit, consisting of the collector and the load, and so they do so. >> said Tom Bearden.
For the Collector, it is necessary to use a conductive material which has a longer relaxation time than copper. This is only because of the electronic circuit design and the limitations of its components. So, Tom Bearden used a "degenerate semiconductor", which has a relaxation time of about 1 ms. The Collector is made with 98% aluminum and 2% iron.
<< Relaxation time :
• A conductor contains a large number of loosely bound electrons, which we call free electrons or conduction electrons. The remaining material is a collection of heavy positive ions called the lattice. These ions keep on vibrating about their mean positions. The average amplitude of vibration depends upon temperature. Occasionally, a free electron collides or interacts in some other fashion with the lattice. The speed and direction of the electron change randomly at each such event. As a result, electrons move in a zig-zag path... The average time between two successive collisions in a conductor is called the relaxation time. (see at: http://www.schooljunction.com/current.htm) >>
The Bearden Collector is charged using a stepwise charging method with a ramp voltage generator; this is commonly used in high-efficiency, low-power-consumption CMOS systems which use an adiabatic charging method (see "Charge Recycling Clocking for Adiabatic Style Logic" by Luns Tee, Lizhen Zheng). With this stepwise charging method, very little energy is required to charge the Collector. If the Collector is a common capacitor, the efficiency is nearly 100%. With the Bearden Collector, this method is used only for transferring the potential. The ramp duration of the voltage must be less than the relaxation time of the Collector used. So there is no current flow in the circuit (dQ/dt ≈ 0) during the charging sequence. When the Collector is fully charged, all the free electrons are "potentialized"; they have their own kinetic energy gained from the potential produced only by the S-Flow. The next step is to use these "potentialized electrons" by switching the circuit to the load; now the Collector acts as a free source of energy, a dipole energized only by the S-Flow of the original source (V1 in the diagram below).....
( This diagram has been updated on July 11, 2001 according to the latest comments from Tom Bearden ( see below ) )
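The efficiency claim for stepwise charging is a standard adiabatic-charging result, independent of anything special about the Collector: charging a capacitance C to voltage V in N equal steps dissipates C·V²/(2N) rather than the one-shot C·V²/2. A quick sketch with assumed component values:

```python
# Energy dissipated when charging a capacitor C to voltage V in N equal steps.
# Standard adiabatic-charging result; C and V below are assumed values.
def stepwise_loss(c: float, v: float, n_steps: int) -> float:
    return c * v**2 / (2 * n_steps)

C, V = 1e-6, 10.0                     # 1 uF charged to 10 V (illustrative)
for n in (1, 10, 100):
    print(n, stepwise_loss(C, V, n))  # loss falls as 1/N toward zero
```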
I hope that, with this short description, I have been able to clarify a bit Tom Bearden's "Final Secret of Free Energy". Now, only a real working device will prove whether his claim can be more than a simple overunity dream...
Source documents:
Thomas Bearden Answers Jerry Decker on Free Energy
I only have time every so many weeks to try to answer such questions.
I'll take some time to try to give you a complete answer, but do not wish to
enter into protracted discussions etc. I'm on a very reduced schedule
anyway, because of the illness, and so only have a little time to spare at
infrequent intervals.
You will never have the answer to the true negative resistor problem or
understand it, until you read the physics literature and study something
besides standard classical electrodynamics and electrical engineering.
Those disciplines and models completely forbid any COP>1.0 system, and any true
negative resistor is a COP = infinity system. SO WHAT MUST BE CHANGED OR
MODIFIED IN THOSE EM AND EE MODELS, IF ONE IS TO EVEN HAVE A COP>1.0 SYSTEM AT ALL? Anyone who is not struggling with that problem has no business claiming to be in the "free energy field". He's not. He's automatically
in the "Well, it's not in conventional EE, so I can't understand it"
field. EE is based on a very archaic and seriously flawed EM model that does not
permit COP>1.0 circuits and systems. Much better electrodynamics models
have long been available in particle physics -- for the simple reason that
the standard EE does not adequately describe nature.
The answers to many of your questions and speculations are already there in
particle physics, and have been for a long time. But one has to read the
physics literature. Sadly, most of the "free energy" community will not
read the literature, will not go look up and read a cited reference or
quotation, etc. and try to understand it. So there exists a "mindset" in
the free energy community, which largely regurgitates classical
electrodynamics and standard electrical engineering, BOTH MODELS of which
specifically prohibit COP>1.0 EM systems in the first place! As an
example, to do COP>1.0 in an EM circuit, that circuit has to violate the
second law of thermodynamics. Where is the discussion in the "free
energy" community about that, and how to do it? Further, it has to violate the
standard closed-current-loop circuit, and it has to violate the arbitrary
Lorentz symmetrical regauging of the Maxwell-Heaviside equations. Where
are the fruitful discussions of the methods for doing those two things?
Well, most do not LIKE such areas. Sorry, but those are the areas that
one must grapple with, if one wishes to grapple with overunity processes and
mechanisms. If the gold is on the right side of the fence and one
persists in looking only on the left side, one should not be surprised that he
never finds the gold. We have to take physics as it comes on its own terms. We
simply cannot dictate what the physics "ought to be", but only try to find
out "what it is". One can point out answers and the exact citations from physics, and we've done that in spades. Then if the community still will not deviate from
CEM and EE, and will not discuss the technical requirements for a COP>1.0
system, then all further discussions with the community are useless. Yet
strangely, those who have never even seen an overunity system or circuit,
much less tested one, seem to assume that they already completely
understand the entire field that is not yet even a field. Merely because they
understand CEM or electrical engineering!
When I wrote the paper on how Bedini is able to generate a true negative
resistor at the boundary (inner surface of the plates) inside a battery,
for the conference that year in Russia, I specifically asked the Russian
scientists to first subject the paper and its explanation to rigorous
analysis, to find if there were any flaws. After that refereeing check
was performed by some excellent Russian scientists, the answer came back that
the paper was okay and would stand up, and was recommended for
publication. Whereupon I submitted the paper to them for presentation in absentia, and
for publication in the proceedings.
You are aware, I think, that there is no real contiguous closed electron
current loop in a battery powered circuit, contrary to the standard
circuit diagram. Instead, there are two very different current half-loops: (1) the
ion current between the plates, completely internal to the battery, and
(2) the electron current half loop, from the outside of one plate through the
external circuit to the outside of the other plate. The mass per unit
charge of the lead ions in a battery is enormously greater (several hundred
thousand times greater) than the mass per unit charge of the electrons.
So the electrons respond very much faster than the sluggish ions. Ergo, one
can readily dephase the two currents, because of the sluggishness of the
ions compared to the rapidity of the electrons. Piece of cake, with the
proper timing.
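The quoted mass-per-unit-charge ratio is easy to check; a quick Wolfram Language computation with standard constants (Pb at about 207 u, doubly charged ions assumed):

(* Mass per unit charge of Pb++ ions versus electrons, SI units. *)
u = 1.6605 10^-27; e = 1.602 10^-19; me = 9.109 10^-31;
ionMQ = (207 u)/(2 e);   (* kg/C for a doubly ionized lead ion *)
electronMQ = me/e;       (* kg/C for an electron *)
ionMQ/electronMQ         (* ~1.9*10^5, i.e., "several hundred thousand times" *)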
Now to pause: suppose you set a "scalar" potential upon the middle of a
transmission line. It doesn't sit there like a "scalar" entity at all!
Instead, it takes off in both directions simultaneously, like two scalded
hogs, nearly at the speed of light. It potentializes the charges in one
direction almost instantly and it also potentializes the charge in the
other direction almost instantly. PLEASE NOTE THAT THE CHARGES TO THE LEFT HAVE A FORCE TO THE LEFT CREATED ON THEM, AND THE CHARGES TO THE RIGHT HAVE A FORCE TO THE RIGHT CREATED ON THEM. If you catch the ions in the charging mode, you can thus reverse the electron current in the external circuit with overpotentialized electrons, while simultaneously overpotentializing the ions in charging mode. This means that excess energy is delivered to powering the external circuit, while excess energy is simultaneously
delivered to the ions in charging mode.
It's as simple as that.
Microwave switching engineer Bill Nelson and engineer Ron Cole had
absolutely no difficulty in reproducing the Bedini process in the 1980s.
Neither did Jim Watson, who later developed and demonstrated an 8 kW device.
Now suppose you suddenly place a potential on the surface of the plates
(between the two plates) of a battery. That potential takes off like a
scalded hog in both directions. It flows across the ions in the battery
between the plates in one direction, and simultaneously it flows out into
the external circuits to "push the charges" in the other direction.
In short, if you time things correctly, you can DEPHASE and DECOUPLE the
two currents in the battery powered system, simultaneously adding potential
energy to both of them, "for free". You can add potential to BOTH the
ions and the electrons. The ions can be moving backward in charging mode,
while the electrons will be driven in the opposite direction in the external
circuit --- in powering direction.
Before one gets bent out of shape about the potential being regauging and
all that, and free additional potential energy and all that, one should go
look up what the "gauge freedom" axiom of quantum field theory means. All
electrodynamicists --- and even the electrical engineers --- assume that
the potential energy of any Maxwellian system can be freely changed at will.
However, they usually assume you will be a gentleman and do it twice
simultaneously, and will also do it just exactly so that the two new free
EM forces produced in the system are equal and opposite. Well, that assumes
that you take in free excess potential energy to the system, but precisely
lock it up so that it cannot translate electrons and therefore push
current and do work in an external load. However, it continuously performs what is
called "internal work" in the system, in opposing directions but equal
magnitude. That work continually forms and maintains excess "stress
energy" in the system, and that is all.
So the first problem for a COP>1.0 system is how to break up that "stress
energy only" assumption. John's way is one way. He actually "splits" the
potential into two directional fields (which it is; see Whittaker 1903,
cited in numerous of my papers), one going in one direction to push the
ions in charging mode, and the other going in the other direction out into the
external circuit to push electrons in powering mode.
That's about as simple as it can be explained. At that point, one either
understands it or one doesn't.
Also, bear in mind that from any nonzero scalar potential phi, regardless
of how small in magnitude, you can collect as much energy as you wish, if you
just have enough charge available to intercept it. That's the simple
equation W = (phi)q, where W is the amount of energy collected in joules
from potential phi, by charges q in coulombs. For a given phi and a
desired W, just include the necessary q. A potential is a set of bidirectional
rivers of flowing energy, as proven by Whittaker in 1903. We do not have
to REPROVE that at all; it's already well known and accepted by every
electrodynamicist worth his salt.
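The arithmetic of W = (phi)q is as simple as claimed; a one-line Wolfram Language illustration with made-up numbers:

(* W = phi q: joules collected from a potential phi (volts) by charge q (coulombs). *)
w[phi_, q_] := phi q;
w[10., 2.]    (* 20 J *)
100./10.      (* 10 C: the charge needed to collect 100 J from a 10 V potential *)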
Any potential is automatically a true negative resistor, since it is a
free harmonic set of bidirectional flows of EM energy (due to its dipolarity
and the broken symmetry of same; it takes the energy right out of the vacuum
via the broken symmetry of the source charge or dipolarity). Hence you can
collect as much energy from it as you wish, from its "flowing rivers of
energy", if you arrange for enough charges (buckets) to collect it (to
collect the water). Nothing says you have to use just one kind of charge
(the electron). You can use -- as Bedini does -- both the ions between the
plates and the electrons in the external circuit. And you can use them
both, and potentialize them both simultaneously with the same potential.
There's no mystery as to how he makes a negative resistor, because ANY AND
EVERY DIPOLARITY AND POTENTIAL ARE ALREADY TRUE NEGATIVE RESISTORS. As is every charge. The energy flows are coming freely from the vacuum, via the proven (in particle physics, NOT in EE) broken symmetry of the source
charge and source dipole. Remember, the first requirement for an overunity
system or true negative resistor is TO GET OUT OF CLASSICAL ELECTRODYNAMICS AND ELECTRICAL ENGINEERING. If one cannot think outside those boxes, one will never get or understand overunity, because IT IS COMPLETELY OUTSIDE THOSE TWO BOXES.
Every charge in the universe is already a true negative resistor of the
purest and most definitive (and easily demonstrated experimentally) kind.
It freely absorbs virtual photons from the seething vacuum, transduces
that into OBSERVABLE (real, detectable, usable) photons, and pours them out in
all directions in 3-space at the speed of light. One doesn't have to
reprove that; it's been proven in physics since 1957.
You want to make a true MACROSCOPIC negative resistor for peanuts? Just
lay a charged capacitor on a permanent magnet so that the E field of the cap
is at right angles to the H field of the magnet. That maximizes E × H, which enters the expression for the Poynting energy flow S = E × H. That silly thing
sits there and steadily pours out real observable usable EM energy E × H at
the speed of light, with no OBSERVABLE electromagnetic energy input into
it. The fact that it is a continuous flow of energy is usually just "mumbled away"; e.g., with some version of the standard textbook quotation about [Poynting's result].
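For scale, the magnitude of S = E × H for crossed static fields is straightforward to estimate; the field values below are illustrative assumptions, and in the standard account this flux merely circulates (its divergence is zero), so no net energy leaves the assembly:

(* Order-of-magnitude Poynting flux for crossed static fields. *)
mu0 = 4 Pi 10^-7;     (* vacuum permeability, H/m *)
eField = 10^5;        (* V/m, a modest charged-capacitor field *)
hField = 1/mu0;       (* A/m, corresponding to a 1 T permanent magnet *)
N[eField hField]      (* ~7.96*10^10 W/m^2 of circulating flux *)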
Before one falls for that "static" nonsense, one must understand what
"static" really is. That's expressed beautifully by Van Flandern, as
follows: "To retain causality, we must distinguish two distinct meanings
of the term 'static'. One meaning is unchanging in the sense of no moving parts. The other meaning is sameness from moment to moment by continual replacement of all moving parts. We can visualize this difference by thinking of a waterfall. A frozen waterfall is static in the first sense, and a flowing waterfall is static in the second sense. Both are 'the same' at every moment, yet the latter has moving parts capable of transferring momentum, and is made of entities that propagate. [Tom Van
Flandern, "The speed of gravity - What the experiments say," Physics
Letters A, Vol. 250, Dec. 21, 1998, p.8-9. ]
From the Whittaker papers of 1903 and 1904, we have known for just about a
century that all static EM fields and potentials are in fact "static"
fields of Van Flandern's second kind --- analogous to an unfrozen waterfall.
There is a continuous bidirectional movement of an internal EM structure of
longitudinal waves inside (and comprising) all EM fields and potentials.
So the "static envelope" of the field exists, but the "inside" components are
in violent change and motion, in BOTH directions. Again, that's been known
and in the literature since 1903.
But that does not appear in the hoary old seriously flawed electrical
engineering, which continues to try to consider the static potential and
static field as a "frozen waterfall" analogy.
Neither does the solution for the source of the input energy to the source
charge, nor the form of that energy input, appear in the CEM and EE
models. The CEM and EE models do not even model the vacuum flux exchange with the
charge, much less a broken symmetry in that exchange.
So they do not even model what powers every electrical circuit. Period.
Never have.
If one wishes to tangle with true negative resistance, then one should
just try to answer (in classical EM only, such as electrical engineering) the
question of from where and how a given charge gets the EM energy that it
continuously pours out, establishing its fields and potentials and their
energy across the universe at the speed of light. If one cannot answer
that question in classical EM and electrical engineering, one will then have to
go read some physics, because it's been answered for 45 years in particle
physics, and a Nobel Prize was awarded to Lee and Yang in 1957 for their
having predicted the basis for that solution. Broken symmetry was such a
tremendous revolution to all of physics that the Nobel Committee moved
with unprecedented speed in awarding that Nobel Prize to Lee and Yang. They
strongly predicted it in 1956-early 1957, and Wu and her colleagues proved
it experimentally in early 1957. The Nobel Prize was then awarded to Lee
and Yang in that same year, in Dec. 1957 -- a nearly unprecedented action.
It would be nice if the electrical engineering departments would walk
across the campus to the particle physics departments, and find out just what
broken symmetry means for the source charge and the source dipole. Voila!
Suddenly they would find out what actually powers every EM circuit and
system, and that the energy --- all of it, every joule of it -- comes from
the seething vacuum via the asymmetry of the source charge or dipole.
They haven't seemed to be able to do that arduous little walk across the campus
task in 45 years now. And they have not changed their model to include the
active vacuum and the broken symmetry in the vacuum exchange with the
charge and the dipole.
If one cannot solve the source charge problem and present that solution
(as CEM and EE cannot do), then one is guilty of implicitly assuming that
every charge in the universe is a perpetual motion machine, freely creating
energy from nothing. That is precisely the case for every electrical engineering
department, professor, and textbook today, and it always has been.
It is quite humorous -- and downright eerie -- that the very fellows so
critical of the overunity researchers as a "bunch of perpetual motion
nuts" also implicitly assume, albeit unwittingly, that every charge in the
universe is a perpetual motion machine, freely and continuously creating
energy out of nothing. Poetic justice.
Further, the charge exhibits giant, continuously increasing negentropy,
because the energy it continuously pours out at a steady and unwavering
rate is not disordered but perfectly ordered. At a given radial distance from
the source charge, the associated field has a specific value and
direction, the associated static potential has a specific value, and the associated
vector potential has a specific value and direction, deterministically and
perfectly ordered.
Well, the very notion of entropy always had a serious flaw anyway. It
pre-assumes that a negentropic operation at least equal to whatever the
entropy is, must have first occurred. Otherwise there could have been no
order in the first place, to SUBSEQUENTLY disorder.
And the solution to the source charge problem provides the answer of where
all that negentropy first comes from, to continuously produce the
negentropy (order) that is later disordered in entropic processes.
So the mere existence of electrodynamics and its giant negentropy and
increasing order of the fields and potentials being poured out of the
source charges destroys any notion of absoluteness in the second law of
thermodynamics (the law of continual increase in disorder, or continuously
increasing entropy).
It has long been recognized that the second law (which is based on
statistical mechanics) does not apply to the single ion, charged particle,
atom, molecule, or group of molecules. At the microscopic level, all
reactions are reversible because the equations are reversible. So things
can run backwards as well as forward at the microscopic level, which is a
form of time-reversal. In a "running backwards" situation, if
macroscopic, then an ordinary resistor would act as a true negative resistor (and so it
does, if you feed it negative energy which is time-reversed energy). My
new book, just coming off the presses, uses that fact to explain cold fusion,
and we give the specific reaction equations producing the excess
deuterium, tritium, and alpha particles --- as well as explaining the strange and
anomalous instrumental problems encountered for some years in rigorous
electrolyte experiments at U.S. Naval research facilities at China Lake.
But it has also long been accepted somewhat dogmatically that, well, the
second law does still irrevocably apply to MACROSCOPIC phenomena and size.
Some things recently have happened to upset or "bother" even that standard assumption.
First, Denis Evans et al. of the Australian National University have
rigorously proven that, contrary to previous assumptions, reactions can
"run backwards" at up to micron (colloidal) scale, and for up to TWO SECONDS.
Now that's within easy switching range for modern circuits and processes. So
all of a sudden it becomes important. The nanobots now being widely developed in nanotechnology, at close to molecular size, will thus experience abrupt periods of "running backwards", and so they will not work at all in the same manner as their much larger counterparts. The reference on the Evans
work is G. M. Wang, E. M. Sevick, Emil Mittag, Debra J. Searles, and Denis
J. Evans, "Experimental Demonstration of Violations of the Second Law of
Thermodynamics for Small Systems and Short Time Scales," Phys. Rev. Lett.,
89(5), 29 July 2002, 050601. A good article to read on what it all means,
is Steven K. Blau, "The Unusual Thermodynamics of Microscopic Systems,"
Physics Today, 55(9), Sep. 2002, p. 19-21. There are other comments on
the Evans et al. work; you can take your choice based on the smugness and
dogma used in the comments.
The individual charged particle, being microscopic (including even an ion
in a solution) comes under the reversible criterion and therefore is
appreciably "immune" to the second law. So one is not too disconcerted to
find it "running backwards" and pouring out real energy, at least for a
short time. In short, one is not surprised that it produces giant negentropy,
FOR A SHORT TIME. What is surprising (and bewildering to classical EM and to
the classical thermodynamicists) is that the charge produces negentropy
CONTINUOUSLY, for any length of time. So it produces continuously
increasing NEGENTROPY.
There are other areas that are also known and recognized to violate
thermodynamics, including in the large macroscopic realm. Several of
these are listed on p. 459 of Dilip Kondepudi and Ilya Prigogine, Modern
Thermodynamics: From Heat Engines to Dissipative Structures, Wiley, 1998,
corrected printing in 1999. Quoting p. 459: "Some of these areas are (1)
"... rarefied media, where the idea of local equilibrium fails. The
average energy at each point depends on the temperature at the boundaries.
Important astrophysical situations belong to this category." (2)
"...strong gradients, where we expect the failure of linear laws such as the Fourier
law for heat conduction. Not much is known either experimentally or
theoretically. Attempts to introduce such nonlinear outcomes ... have led
to 'extended thermodynamics' ." (3) "...memory effects which appear for
long times (as compared to characteristic relaxation times).
...non-equilibrium processes may have 'long time-tails'...".
Forefront scientists are attempting to extend thermodynamics at present,
to include (hopefully) some kind of explanation for these areas.
But what is important is that the energy continuously poured out by every
magnetic or electrical charge (as a true negative resistor, extracting
unusable energy from the vacuum and pouring it out in usable EM form)
forms perfect order, perfectly correlated to that charge, to any macroscopic
size one wishes. Just pick a size and wait long enough for the speed of light
to reach that radial distance, and you will have a volume of that radius that
has been filled with perfectly ordered EM energy from that source charge.
The original charges in original matter in the universe have been doing
that for 14 billion years, and they are still going. And their perfectly
ordered fields and potentials reach across the entire observable universe.
So every part of electrodynamics --- the source charge, the field, the
potential, and every joule of EM energy in every EM field and potential,
whether in space or in matter --- is in total violation of the second law
of thermodynamics, and TO ANY MACROSCOPIC SIZE LEVEL ONE WISHES, INCLUDING ACROSS THE ENTIRE UNIVERSE when one accounts the perfect and continually increasing order of the fields and potentials and their energy.
So there you have your true negative resistor (not to be confused with the
silly tunnel diode, which "puts some energy back to the circuit power
source in reverse against the voltage" while eating lots more energy from the
power source as work performed to allow it to be done) in every charge in the
universe. And all EM energy -- in every field, potential, and circuit and
system --- comes directly from the vacuum, via the broken symmetry of the
source charge.
Don't underrate the importance of the source charge problem. Either one
has to have a solution to that problem, or else one must surrender the
conservation of energy law in its entirety, since it is totally falsified
by every charge in the universe unless the source charge solution from
particle physics is included in one's model. For the EE model and CEM, that would
require drastic surgery and extension of the models. Actually, much
better systems of electrodynamics are already created and available in particle physics.
As we said, classical electrodynamics and electrical engineering do not
include the active vacuum in their model, nor therefore the broken
symmetry in the exchange between the active vacuum and every charge and dipole in
the circuit. Since those models do not include the actual source of any or
all the EM energy in a circuit or system, then those models do not include
what powers an electrical circuit or system (some of that very energy that is
extracted from the vacuum via the source charge's broken symmetry).
That was all excusable until 1957. Today it is inexcusable, once one
points out the solution sitting there in particle physics.
And if you really wish to get at this matter of energy flow really well,
then read the original papers of Heaviside and Poynting, who independently
and simultaneously in the 1880s discovered the propagation of EM energy
in space, after Maxwell was already dead. Before that, the concept did not
even appear in physics. The primary energy flow connected with a circuit
actually flows outside the conductors, in the external space. A tiny bit
of it (the Poynting component) is diverged into the circuit conductors to
power the electrons. The huge remainder (the Heaviside nondiverged energy flow
component, which is in circulation form) is not diverged into the circuit
at all, but is just wasted and ignored. Lorentz in the 1890s stated that,
well, it has no physical significance (because it does not do anything),
so he originated a clever little integration trick to get rid of all
accountability of it. The abandoned and unaccounted Heaviside component
may have a magnitude up to a trillion times or more, of the magnitude of the
Poynting component.
I am working on a paper that points out some very startling and completely
unexpected things that are indeed "done" by that long neglected Heaviside
component. It plays a major role in the appearance of the various ice
ages upon the Earth, and creates the excess gravity that is holding the arms of
the spiral galaxies intact (Heaviside himself recognized the gravitational
implications of his extra component, and dealt with it in his notes, but
did not live to publish it. The notes were found in 1957 (curious
coincidence!) and published by one of the learned societies. If applied properly, the
Heaviside component also plays the major role in producing the mysterious
antigravity that is accelerating the expansion of the universe; I explain
that in my forthcoming book, just now coming off the presses. The
Bohren-type experiment (with the so-called "negative resonance absorption
of the medium") is also an experiment routinely done by nonlinear optical
departments. It outputs 18 or so times as much energy as one inputs.
There are some other important contributions of the Heaviside component that I
will include in the paper, which will require another two or three months
to finish. However, my main point is this: when the long-unaccounted --- ARBITRARILY excluded! --- Heaviside energy flow component is re-accounted, then every generator and battery and dipolar power source in the universe already pours out enormously more EM energy than the mechanical shaft energy input to
the generator, the chemical energy dissipated in the battery, and so on. All
of them always have. One can experimentally demonstrate the existence of
that long-neglected component, by a Bohren-type experiment. See Craig F.
Bohren, "How can a particle absorb more than the light incident on it?" American
Journal of Physics, 51(4), Apr. 1983, p. 323-327. Under nonlinear conditions, a particle can absorb more energy than is in the light incident on it; metallic particles at ultraviolet frequencies are one example of such particles, and insulating particles at infrared frequencies are another.
See also H. Paul and R. Fischer, Comment on "How can a particle absorb more than the light incident on it?", Am. J. Phys., 51(4), Apr. 1983, p. 327.
The Bohren experiment is repeatable and produces COP = 18.
Anyway, you have true negative resistors everywhere you turn: in every
charge in the universe, and every power source also if you re-account for
the long-neglected Heaviside nondiverged energy flow component associated
with every field/charge and potential/charge interaction.
Tom Bearden
US Patent#6392370
John Bedini and Thomas Bearden have been working on these systems now for over 30 years. Imagine driving your car and crossing the same river over and over. Then the light bulb in your head goes on, and you begin to think: what does this mean? It's "Nature's Open System".
The very next thing to do is to stick a paddlewheel into the river. This is where we stop, for we have just created an open system at the paddlewheel; everything from the shaft to the generator to your load is now in a closed path, but the river is FREE and OPEN. What electrical engineers do is take the output of the river and bring it back to the input of the river and then pump the hell out of the paddlewheel to keep the river moving. This is called "closing the loop". With this type of system you can NEVER GET A COP OF 1 OR BETTER. The source is "Nature's Open System". You walk around in this system every day and fail to see how it works. What does this mean for electrical circuits? It means you can never collect anything that you do not pay for, for you are forever pumping that river. The universe is an open system, a continually running river. All you must do is find where to put the paddlewheel and not close the loop.
John Bedini
Kron, Gabriel. “Now a value E of the negative resistances, at which the generator current becomes zero, represents a state at which the circuit is self-supporting and has a continuous existence of its own without the presence of the generator, as the negative resistances just supply the energy consumed by the positive resistances. (If the circuit contains inductors and capacitors, the circuit is a resonant circuit and it oscillates at its basic frequency.) … When the generator current is positive the circuit draws energy from the source, and when the current is negative the circuit pumps back energy into the source. At zero generator current the circuit neither gives nor takes energy, and theoretically the generator may be removed.” Gabriel Kron, “Electric circuit models of the Schrödinger equation,” Phys. Rev. 67(1-2), Jan. 1 and 15, 1945, p. 41.
Kron, Gabriel. "...the missing concept of "open-paths" (the dual of "closed-paths") was discovered, in which currents could be made to flow in branches that lie between any set of two nodes. (Previously — following Maxwell — engineers tied all of their open-paths to a single datum-point, the 'ground'). That discovery of open-paths established a second rectangular transformation matrix... which created 'lamellar' currents..." "A network with the simultaneous presence of both closed and open paths was the answer to the author's years-long search." Gabriel Kron, "The Frustrating Search for a Geometrical Model of Electrodynamic Networks," Journal unk., issue unk., circa 1962, p. 111-128. The quote is from p. 114.
Kron, Gabriel. "When only positive and negative real numbers exist, it is customary to replace a positive resistance by an inductance and a negative resistance by a capacitor (since none or only a few negative resistances exist on practical network analyzers.)" Gabriel Kron, "Numerical solution of ordinary and partial differential equations by means of equivalent circuits." Journal of Applied Physics, Vol. 16, Mar. 1945a, p. 173.
So this is what Kron is saying:
When the generator current becomes zero, the circuit is self-supporting, as the negative resistances just supply the energy consumed by the positive resistances. When the generator current is positive, the circuit draws energy from the source, and when the current is negative, the circuit pumps energy back into the source; this is the interplay of "open-paths" and "closed-paths". That discovery of open-paths established a second rectangular transformation matrix, which created "lamellar" currents. This circuit replaces a positive resistance by an inductance and a negative resistance by a capacitor.
So here is the proof that what Kron is saying is true, and the light runs itself without any power from the primary source.
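One way to see the linear-circuit content of Kron's remark is a minimal sketch (a hypothetical series loop with inductance l, capacitance c, positive resistance r, and a negative resistance rn, not Kron's own network model): when rn = -r, the damping vanishes and the loop oscillates at its basic frequency with the generator removed.

(* Series loop l, c, r plus a hypothetical negative resistance rn = -r. *)
sol = DSolveValue[
   {l x''[t] + (r + rn) x'[t] + x[t]/c == 0, x[0] == q0, x'[0] == 0} /. rn -> -r,
   x[t], t];
Simplify[sol]   (* q0 Cos[t/Sqrt[c l]]: undamped oscillation at 1/Sqrt[l c] *)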
I must stop right here and say: Thomas Bearden and I have been friends for 20 years. During this time we have been the best of friends through thick and thin, and I always will be Tom's friend forever. You only have two or three friends in your lifetime that you can trust with your life, and Thomas Bearden is the one. I enjoyed building everything on the bench that had to do with Tom's theories, and with a little work "THEY WORK". But you must learn how to think outside the box. To this day Tom and I always keep discussing this field, and it will never end.
The Year was 1983," So you do not know your history"
This was 19 years ago
Thomas Bearden 1983
Toward a New Electromagnetics, Part 4:
On this slide, we show a theoretical scheme which several researchers have discovered and used to build simple free energy motors.
In this scheme, we drive an ordinary d.c. series motor by a two wire system from an ordinary battery. The motor produces shaft horsepower, at – say – some 30 or 40 percent efficiency, compared to the power drained from the battery. This much of the circuit is perfectly ordinary.
The trick here is to get the battery to recharge itself, without furnishing normal power to it, or expending work from the external circuit in the process. (This is the paddlewheel in the river.)
To do this, recall that a charged particle in the "hooking" del-phi river moves itself. This is true for an ion, as well as for an electron. We need only make the del-phi in the correct fashion and synchronize it; specifically, we must not release the hose nozzles we utilize to produce our del-phi river or waves. (The charge moves itself.)
The inventors who have discovered this have used various variations, but here we show a common one.
First, we add an "energizer" (often referred to by various other names) to the circuit. This device makes the del-phi waves we will utilize, but does NOT make currents of electron masses. In other words, it makes pure phi-dot. It takes a little work to do this, for the energizer circuit must pump a few charges now and then. So the energizer draws a little bit of power from the motor, but not very much. (The energizer is a unit that develops no current, only potential charge, and puts no drag on the DC motor.)
Now we add a switching device, called a controller, which breaks up power to the motor in pulses. During one pulse, the battery is connected and furnishes power to the motor; during the succeeding pulse, the battery is disconnected completely from the motor and the output from the energizer is applied across the terminals of the battery. (This device is any modern PWM motor speed controller.)
If frequency content, spin-hole content, etc. are properly constructed by the energizer, then the ion movements in the battery reverse themselves, recharging the battery. Again remember that these ions MOVE THEMSELVES during this recharge phase. Specifically, we are NOT furnishing ordinary current to the battery, and we are not doing work on it from the energizer. (It is all proper timing and switching after this.)
If things are built properly, the battery can be made to more than recover its charge during this pulse cycle.
To prevent excess charge of the battery and overheating and destroying it, a sensor is added which senses the state of charge of the battery, and furnishes a feedback signal to the controller to regulate the length of recharge time per "power off" pulse. In other words, the system is now self-regulating.
The relation between power pulses and recharge pulses is shown on the graphs at the bottom. Note that regulation may decrease the time of recharge application of the del-phi river.
This system, if properly built and tuned, will furnish "free shaft energy" continually, without violating conservation of energy. Remember that the del-phi condition across the battery terminals means that space-time is suddenly curved there, and conservation of energy need no longer apply.
Again, this system is consistent with general relativity and with the fact that the phi-field alone can drive a situation relativistic. We have deliberately used these facts to do direct engineering. Our "extra energy" comes from shifting phi-flux - the energy of the universal vacuum space-time - directly into ordinary energy for our use. Thus we draw on an inexhaustible source, and our device is no more esoteric than a paddlewheel in a river. The only difference is that, in this case, we have to be clever enough to make and divert the river in the right timing sequence. (The "open and closed paths", Kron.)
(c) By Thomas E Bearden 1983
So what Thomas Bearden is saying, which applies to Kron, is that the motor is a variable inductor and is in the "CLOSED PATH"; this is all normal EM. However, when the motor is disconnected from the battery, this then becomes an "OPEN PATH" to the energizer, which is electrostatic in potential, so no magnetic flux cutting is needed. In other words, the energizer applies no load to the motor. This system is just backwards to Kron's statement but does the same thing. NOTE: the electrostatic energizer must pump a few electrons during this process, but very few. (These are called "lamellar" currents.)
(c) John Bedini 9-15-2002
|
7418c72b2646294b |
Quantum technologies
Dieter Meschede's research group
AMO physics colloquia
• Roman Schnabel
• Invited speaker: Prof. Roman Schnabel
Affiliation: Universität Hannover
Title: Towards Nonclassical Systems of Massive Objects
Time and room: 17:15 lecture hall IAP
Abstract: The theory of quantum mechanics is an extremely successful theory but does not as yet include gravity. A possible step towards a unified theory of quantum gravity might be the addition of the (nonrelativistic) Newtonian theory to quantum mechanics, and a check of whether such a modified theory predicts nontrivial effects that can be tested in an experiment. Several proposals were made in the past. Recently, it was shown that the Schrödinger-Newton equation predicts a quantum state evolution different from that described by the standard Schrödinger equation [1]. An experimental test might be feasible by using a massive mirror that is suspended as a high-Q pendulum, similar to those used in gravitational wave detectors. This talk proposes the experimental realisation of such a system on the basis of kilogram-scale fibre-suspended mirrors. The mechanical states envisioned include squeezed states as well as Einstein-Podolsky-Rosen entangled states of centre-of-mass motions.
[1] H. Yang, H. Miao, Da-shin Lee, B. Helou, and Y Chen, Macroscopic Quantum Mechanics in a Classical Spacetime, Phys. Rev. Lett. 110, 170401 (2013).
• Klas Lindfors
• Invited speaker: Prof. Klas Lindfors
Affiliation: Universität zu Köln
Title: Controlling Light With Optical Antennas
Time and room: 17:15 lecture hall IAP
Abstract: Plasmon-resonant metal nanoparticles enable controlling optical fields on the nanoscale. In analogy to the radio-frequency domain, such particles are often called optical antennas. They have led to a multitude of breakthroughs in, e.g., enhancement of light emission and radiation engineering of single photons, as well as in sensing. I will present the results of our work on using optical antennas to enhance the optical properties of single self-assembled quantum dots and to realize optical point-to-point links. Coupling a quantum emitter to the antenna allows us to enhance both its absorption efficiency and its emission rate. Meanwhile, point-to-point links based on optical antennas are a promising concept to transmit optical signals between nano-objects. I will show the first realization of such a link.
• Marc Bienert
• Invited speaker: Dr. Marc Bienert
Affiliation: Universität des Saarlandes
Title: Wielding The Photonic Tool: Controlling Atoms In Cavities
Time and room: 17:15 lecture hall IAP
Abstract: The scattering of photons at a single atom can serve for both manipulation and readout of the atomic quantum state. If the atom is placed in an optical resonator, the reshaped mode structure of the quantized light field allows one to enhance certain scattering paths. This can be exploited for improved cooling schemes relying on quantum interference. Moreover, from the properties of the scattered light, information about the motional quantum state can be inferred. I review the basic ideas of cavity cooling of single atoms and theoretically discuss the role of interferences of scattering paths leading to enhanced cooling. Furthermore, I present the spectral properties of the light emitted from the cavity for differently shaped trapping potentials. The analysis of the spectral form allows one to deduce the temperature of the cooled atom.
• Antoine Georges
• Invited speaker: Prof. Antoine Georges
Affiliation: Collège de France, Paris
Title: Quantum Matter From Hot Superconductors To Cold Atoms
Time and room: 17:15 h, Kleiner Hörsaal Mathematik, Wegelerstraße 10
• Antoine Georges
• Invited speaker: Prof. Antoine Georges
Affiliation: Collège de France, Paris
Title: The Coolest Transport: Ultra-Cold Atomic Gases Meet Mesoscopics And Thermoelectrics
Time and room: Monday, 15:15 h, lecture hall IAP
|
ec01cf9b8d51c19e | PCC-32806 Computer Modelling of Biomolecules
Credits (studiepunten): 6.00
Course coordinator(s): dr. RJ de Vries
Lecturer(s): prof. dr. ir. FAM Leermakers
dr. RJ de Vries
AH Westphal
prof. dr. JT Zuilhof
Examiner(s): dr. RJ de Vries
Language of instruction:
Assumed knowledge on:
Elementary mathematics; PCC-21802/23303 Introductory Thermodynamics A/B; PCC-22306 Driving Forces in Chemistry, Physics and Biology.
Continuation courses:
Thesis PCC, ORC, BIC, and more.
Contents:
Computer modelling of biomolecules has become an indispensable tool in biomolecular science and technology, next to experiments and theory. For example, it plays a key role in the discovery of new drugs and in the structure elucidation of proteins and protein complexes. Building on a basic background in physical chemistry (see assumed knowledge above), this course introduces the basic theory behind biomolecular simulation techniques such as molecular mechanics, molecular dynamics, Langevin dynamics and Brownian dynamics, and Monte Carlo simulations. A last part of the course deals with quantum chemical modelling. While the emphasis of the course is on applications of computer simulations to biomolecules, the techniques discussed in the course apply more generally, so the course is also of interest for students who have a more general interest in computer simulations of molecules. Tutorials with exercises are used as an aid to obtain a working understanding of the theory, and computer practicals are used to learn to work with the various simulation techniques on simple example projects.
Learning outcomes:
- judge relevant basic concepts in computer modelling of biomolecules: molecular forces, energy minimization, statistical thermodynamics, Schrödinger equation;
- analyse the formulas presented in the course text, lectures, and tutorials in a mathematically correct way, and apply them in simple computations (with due attention to dimensions and units);
- explain the essentials of computer modelling of biomolecules: molecular mechanics, molecular dynamics, Monte Carlo, Langevin Dynamics, Brownian Dynamics, Quantum Computations;
- properly interpret the results of computer simulations on relevant biomolecular systems.
Activities:
- lectures;
- tutorials;
- computer practicals.
Examination:
- written exam with open questions (70%);
- reports on computer labs (30%).
Each component needs a minimum mark of 5.5 to pass. The computer labs contribute to the final mark only if they are rated higher than the written exam. Interim results remain valid for three years.
Literature:
Alan Hinchliffe (2008). Molecular Modelling for Beginners. Wiley. 428 p. ISBN 978-0-470-51314-9.
Elective for (Keuze voor):
MML Molecular Life Sciences MSc, Spec. D Physical Chemistry - period 6, MO
MML Molecular Life Sciences MSc, Spec. C Physical Biology - period 6, MO |
d888ad4a9193e7c5 | Alexei Boulbitch
1. Introduction: Soft Bifurcation of a Stationary Nonlinear PDE
The method described here can be applied to solve PDEs coming from different domains. However, it was initially developed to get the numerical solution of a stationary nonlinear PDE with a bifurcation. The method's application to a broader class of equations is briefly discussed at the end of the article.
The term bifurcation describes a phenomenon that occurs in some nonlinear equations that depend on one or several parameters. These equations can be algebraic, differential, integral or integro-differential. At some values of a parameter, such an equation may exhibit a fixed number of solutions. However, as soon as the parameter exceeds a critical value (referred to as the bifurcation point), the number of solutions changes and either new solutions emerge or some old ones disappear. To be specific, we discuss the case of dependence on a single parameter $\mu$.
The new solutions can emerge continuously at the bifurcation point. The norm of the solution exhibits a continuous though nonsmooth dependence on the parameter $\mu$ at the bifurcation point (left, Figure 1). An explicit example is in Section 4.5. A bifurcation at which the solution is continuous at the bifurcation point is referred to as supercritical or soft.
The behavior of the solution in the case of a subcritical or hard bifurcation is different: the norm of the solution is finite at the bifurcation point but has a jump discontinuity there (right, Figure 1).
Figure 1. Soft versus hard bifurcation. In the case of the soft bifurcation, the solution norm has a continuous dependence on the control parameter $\mu$, with a kink at the bifurcation point $\mu = \mu_c$. In contrast, in the case of a hard bifurcation, the solution is discontinuous at the bifurcation point.
In this article, we focus only on the case of a nonlinear PDE with soft bifurcations; some peculiarities of hard bifurcations are briefly discussed in Section 5.3.
In the most general form, a nonlinear PDE can be written as:
$\hat F\left[\mathbf{u}_s(\mathbf{x}), \mu\right] = 0.$   (1)
Here $\hat F = \{F_1, \dots, F_n\}$ with $n \ge 1$, so that (1) indicates a system of nonlinear PDEs; $\mathbf{u}_s = \{u_1, \dots, u_n\}$ is an $n$-dimensional vector representing the dependent variable. The subscript $s$ indicates that $\mathbf{u}_s$ is the solution of a stationary equation. Further, x is a $D$-dimensional vector. Finally, $\mu$ is a real numerical parameter. The system of equations (1) is analyzed in a domain $\Omega$ subject to zero Dirichlet boundary conditions:
$\left.\mathbf{u}_s\right|_{\partial\Omega} = 0.$   (2)
Also assume that
$\hat F[0, \mu] = 0,$   (3)
and thus $\mathbf{u}_s = 0$ represents a trivial solution of (1, 2).
It is convenient to separate out the linear part of the operator (1), which is often (though not always) representable in the form of a product $\mu\,\mathbf{u}_s$, and to write it down in the following form:
$\hat F[\mathbf{u}_s, \mu] = \hat L\,\mathbf{u}_s - \mu\,\mathbf{u}_s + \hat N[\mathbf{u}_s] = 0.$   (4)
Here $\hat L$ is a linear differential operator (such as, for example, the Laplace operator). Further, $\hat N$ is the nonlinear part of the operator $\hat F$. The assumption that $\mathbf{u}_s = 0$ solves equation (1) implies that $\hat N[0] = 0$.
In its explicit form, we use the representation (4) only in Section 2.2, where we derive the critical slowing-down phenomenon. In all other cases, a general form of the dependence of equation (4) on $\mu$ is valid: $\hat F = \hat F[\mathbf{u}, \mu]$ and $\hat N = \hat N[\mathbf{u}, \mu]$. Nevertheless, we stick to the form (4) for simplicity, while the generalization is straightforward.
Let us also consider an auxiliary equation
$\hat L\,\psi_n + \lambda_n\,\psi_n = 0$   (5)
that yields the linear part of the nonlinear equation (4). Equation (5) represents the eigenvalue problem, where the $\psi_n$ are its eigenfunctions and the $\lambda_n$ are its eigenvalues, indexed by the discrete variable $n$, provided the discrete spectrum of (5) exists. Let us assume that at least a part of the spectrum of (5) is discrete. We assume here that $n$ starts from zero: $n = 0, 1, 2, \dots$, with $\lambda_0$ the smallest discrete eigenvalue. The state with $n = 0$ is referred to as the ground state.
Without proofs, we recall a few facts from bifurcation theory [1] valid for soft bifurcations of such equations.
Assume that the trivial solution is stable for some values of $\mu$. As soon as the parameter $\mu$ becomes equal to the critical value $\mu_c = -\lambda_0$, fixed by the smallest discrete eigenvalue $\lambda_0$ of the auxiliary equation (5), this solution becomes unstable. As a result, a nontrivial solution branches off from the trivial one. In the close vicinity of the bifurcation point $\mu_c$, this solution has the asymptotics
$\mathbf{u}_s(\mathbf{x}) = \left(\boldsymbol{\xi} \cdot \boldsymbol{\psi}_0(\mathbf{x})\right) + O\left(|\boldsymbol{\xi}|^\gamma\right),$   (6)
where $\boldsymbol{\psi}_0 = \{\psi_0^{(1)}, \dots, \psi_0^{(m)}\}$ is the set of eigenfunctions of the equation (5) belonging to the eigenvalue $\lambda_0$. The vector $\boldsymbol{\xi} = \{\xi_1, \dots, \xi_m\}$ is the set of amplitudes. The scalar product stands for the expression $(\boldsymbol{\xi} \cdot \boldsymbol{\psi}_0) = \sum_{j=1}^{m} \xi_j\,\psi_0^{(j)}$. Here the index $j$ (where $1 \le j \le m$) enumerates the eigenfunctions in the $m$-dimensional subspace of the functional space where (5) has a nonzero solution. The exponent exceeds unity: $\gamma > 1$.
There are a few methods available to determine $\boldsymbol{\xi}$. Listing them is out of the scope of this article. However, the simplest of these methods can be applied if there exists a generating functional $E[\mathbf{u}]$ enabling one to obtain the system of equations (1) as its minimum condition:
$\dfrac{\delta E[\mathbf{u}_s]}{\delta \mathbf{u}_s} = 0,$   (7)
where $\delta E/\delta \mathbf{u}_s$ is the variational derivative. This functional we refer to as energy in analogy with physics. Substituting the representation (6) into the energy functional and integrating out the spatial coordinates, one finds the energy as a function of the amplitudes $\boldsymbol{\xi}$ and parameter $\mu$. Minimizing the energy with respect to the amplitudes yields the system of equations for the amplitudes, referred to as the ramification equation:
$\dfrac{\partial E(\boldsymbol{\xi}, \mu)}{\partial \xi_j} = 0, \quad j = 1, \dots, m.$   (8)
Their solution is only accurate close to the bifurcation point $\mu_c$. Assuming that the bifurcation takes place with decreasing $\mu$ (as is the case in the following example), one finds the typical solution for the amplitudes,
$\xi = \begin{cases} 0, & \mu \ge \mu_c, \\ \pm a\,(\mu_c - \mu)^{1/2}, & \mu < \mu_c, \end{cases}$   (9)
where $a$ and $\mu_c$ are real numbers to be determined using the original equation. One of the methods to analytically find these parameters is discussed in Section 3. Further analytical methods may be found in [1]. This article focuses on finding these parameters numerically (Section 4.5).
All theorems and proofs for the preceding statements, along with more general methods of the derivation of the ramification equation, can be found in [1].
2. Numerical Description of a Soft Bifurcation: A Problem and a Workaround
The bifurcation theory formulated so far is quite general: equation (1) can be differential, integral or integro-differential [1]. In what follows, we focus only on a more specific class of nonlinear partial differential equations.
The solution of the spectral system of equations (5) yields the bifurcation point $\mu_c$; the solutions (6) and (9) are only valid very close to this point. With increasing $|\mu - \mu_c|$, the solution soon deviates from the correct behavior quantitatively, and the solution often fails to resemble (6) even qualitatively. For this reason, to get the solution at some finite $|\mu - \mu_c|$ that would be correct both qualitatively and quantitatively, one needs to solve (1) numerically.
In the case of a hard bifurcation, none of the machinery of the theory of soft bifurcations described so far works. Studying the bifurcation numerically often becomes the only possibility.
However, the direct numerical solution of nonlinear equations like (1) and (4) with some nonlinear solvers only returns the trivial solution for equation (4), even at the values of the parameter at which the trivial solution is unstable and a stable nontrivial solution already exists.
A plausible reason may be as follows: the solver starts to construct the PDE solution from the boundary. Here, however, the boundary condition $\left.\mathbf{u}_s\right|_{\partial\Omega} = 0$ is already part of the trivial solution. Thus the solver appears to be placed at the true solution of the equation and is then unable to climb down from it.
To find a nontrivial solution, one needs to use a method that would start from some initial approximation that, even if rough, should be quite different from the trivial solution. Furthermore, this method should converge to the nontrivial solution by a chain of successive steps.
2.1. A Pseudo-Dynamic Equation
One can do this with the pseudo-dynamic approach formulated in the present article.
Let us introduce pseudo-time $t$. The word "pseudo" indicates that $t$ is not real time. It just represents a technical trick that helps with the simulation. Assume now that the dependent variable is a function of both the set of spatial coordinates x and the pseudo-time: $\mathbf{u} = \mathbf{u}(\mathbf{x}, t)$. Instead of the stationary equation (1), let us study the behavior of the pseudo-time-dependent equation:
$\dfrac{\partial \mathbf{u}}{\partial t} = \hat F[\mathbf{u}, \mu].$   (10)
One solves equation (10) with a suitable nonzero initial condition $\mathbf{u}(\mathbf{x}, 0) = \mathbf{u}_0(\mathbf{x})$. Let us stress that the solution of the time-dependent equation (10) is not the same as the solution of the stationary equation (1).
One could also construct the pseudo-time-dependent equation as follows: $\partial \mathbf{u}/\partial t = -\hat F[\mathbf{u}, \mu]$, that is, with a minus sign in front of $\hat F$. The idea of such an extension is that either (10) or its counterpart exhibits a fixed point, so that $\mathbf{u}(\mathbf{x}, t) \to \mathbf{u}_s(\mathbf{x})$ as $t \to \infty$, while the solution of the other diverges as $t \to \infty$. By trial and error, one chooses the equation whose solution converges to the fixed point $\mathbf{u} = \mathbf{u}_s$.
The operator $\hat F$ has not yet been specified; for definiteness let us assume that the fixed point at $\mathbf{u} = \mathbf{u}_s$ takes place for equation (10), that is, with the plus sign in front of $\hat F$.
The convergence of the solution of the dynamic equation to the fixed point enables one to apply the following strategy. Instead of the static equation (1), which is difficult to solve numerically, one simulates the quasi-dynamic equation (10) using a suitable time-stepping algorithm.
The advantage of this approach is in the possibility of starting the simulation from an arbitrary distribution chosen as the initial condition, provided it agrees with the boundary conditions. From the very beginning, such a choice takes one away from the trivial solution. The time-stepping process takes the initial condition for each step from the previous solution. The solution starting from any function $\mathbf{u}_0$ gradually converges to $\mathbf{u}_s$ with time if $\mathbf{u}_0$ belongs to its attraction basin.
After having obtained the solution of the pseudo-time-dependent equation, one approximates the stationary solution as $\mathbf{u}_s(\mathbf{x}) \approx \mathbf{u}(\mathbf{x}, t_{\max})$ at a large enough value $t_{\max}$ of the pseudo-time. The meaning of the words large enough is clarified in Section 4.3.
The approach can be given a pictorial interpretation (Figure 2). In the infinite-dimensional functional space, let $\{\phi_k(\mathbf{x})\}$ be an infinite set of basis functions. Then the function $\mathbf{u}(\mathbf{x}, t)$ can be represented as
$\mathbf{u}(\mathbf{x}, t) = \sum_k c_k(t)\,\phi_k(\mathbf{x}).$   (11)
Figure 2. Schematic view of the 3D projection of the infinite-dimensional functional space with a trajectory from the initial state (blue dot) to the fixed point (red dot).
The trajectory in this space goes from the initial state $\mathbf{u}_0$ to the final state $\mathbf{u}_s$, as shown by the two dots.
The time derivative $\partial \mathbf{u}/\partial t$ represents the velocity of the motion of a point through this space, while $\hat F[\mathbf{u}, \mu]$ can be regarded as a force driving this point. Thus equation (10) can be interpreted as describing a driven motion of a massless point particle with viscous friction through the functional space. In these terms, the condition (1) means that the driving force is equal to zero at some point of the space, which is the location of the fixed point of the nonlinear equation (10).
If the energy functional for equation (1) exists, one can make one further step in the interpretation (Figure 3).
Figure 3. Schematic view of the energy functional as the function of the coordinate in the functional space (A) above and (B) below the bifurcation point. The cross section of the infinite-dimensional space along a single coordinate is shown. The points show initial positions of the particle, while the arrows indicate its motion to the nearest minimum of the potential well.
Indeed, according to the definition given, the solution of equation (1) delivers a minimum to the energy functional. In this case, one can regard the dynamic equation (10) as describing a viscous motion of the massless point particle along the energy hypersurface $E = E[\{c_k\}]$ in the functional space, the surface forming a potential well. The motion goes from some initial position to the minimum of the potential well, as shown schematically in Figure 3. Above the bifurcation, this minimum only corresponds to the trivial solution (A), situated at $\mathbf{u} = 0$. Below the bifurcation, the energy hypersurface exhibits a new configuration with new minima, while the previous minima vanish. As a result, below the bifurcation, the point particle moves from the initial position (shown by dots in Figure 3) to one of the newly formed minima (as the red and green arrows show in B). The functional space has infinite dimension, and essential features of the numeric process may involve several dimensions. The one-dimensional representation displayed in Figure 3 is therefore oversimplified and only partially represents the bifurcation phenomenon.
Equation (10) can be rewritten as:
$\dfrac{\partial \mathbf{u}}{\partial t} = \hat L\,\mathbf{u} - \mu\,\mathbf{u} + \hat N[\mathbf{u}].$   (12)
Though lacking a stationary nonlinear PDE solver at present, Mathematica offers the NDSolve option Method -> "MethodOfLines", efficiently applicable to dynamic equations like (12). This method is applied everywhere in the rest of this article.
The evident penalty of this approach is that the computation time can become large, especially in the vicinity of the bifurcation point; this peculiarity is discussed next.
2.2. A Critical Slowing Down
Close to the critical point $\mu_c$, the relaxation of the solution to the fixed point dramatically slows down. This is referred to as critical slowing down. Its origin is illustrated in Section 4. To simplify the argument, let us consider a single equation with the one-component dependent variable $u$ that still depends on the $D$-dimensional coordinate x. The generalization for a system of equations is straightforward, though a bit cumbersome.
According to (6), close to the bifurcation point, one can look for the solution of equation (12) in the form:
$u(\mathbf{x}, t) = \xi(t)\,\psi_0(\mathbf{x}) + O(\xi^\gamma).$   (13)
Ignore the higher-order terms, assuming that $\xi$ is small. Substitute (13) into equation (12) and linearize it. Here one should distinguish between the case at $\mu > \mu_c$, where the linearization should be done around $\xi = 0$, and that at $\mu < \mu_c$, where one linearizes with the center at $\xi = \pm a\,(\mu_c - \mu)^{1/2}$ (the second line of equation (9)). In the former case, one finds
$\dfrac{d\xi}{dt}\,\psi_0 = \xi\left(\hat L\,\psi_0 - \mu\,\psi_0\right).$   (14)
Making use of (5), one finally obtains the dynamic equation for $\xi$ at $\mu > \mu_c$:
$\dfrac{d\xi}{dt} = -(\mu - \mu_c)\,\xi,$
implying that $\xi \propto \exp(-t/\tau)$, and the relaxation time has the form $\tau = (\mu - \mu_c)^{-1}$.
At $\mu < \mu_c$, analogous but somewhat more lengthy arguments give the characteristic time, twice as small as that above the critical point. One comes to the relation:
$\tau = \begin{cases} (\mu - \mu_c)^{-1}, & \mu > \mu_c, \\ \left[2\,(\mu_c - \mu)\right]^{-1}, & \mu < \mu_c. \end{cases}$   (15)
One can see that the relaxation time diverges as $\mu \to \mu_c$ from both sides. From the practical point of view, this suggests increasing the simulation time according to (15) near the critical point.
The result (15) is valid for equation (12), in which the linear part of the pseudo-dynamic equation has the form $\hat L\,\mathbf{u} - \mu\,\mathbf{u}$. That is, the parameter $\mu$ enters this equation only linearly, in the form of the product $\mu\,\mathbf{u}$. In the general case $\hat F = \hat F[\mathbf{u}, \mu]$, one still finds a diverging relaxation time $\tau \propto |\mu - \mu_c|^{-1}$, though the factors (such as 1 above and 2 below the bifurcation point) may be different.
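A tiny Wolfram Language helper makes the practical consequence concrete; the safety factor 20 and the sample values of $\mu$ are illustrative assumptions:

(* Relaxation time (15) and a pseudo-time budget proportional to it. *)
tau[mu_, muc_] := If[mu > muc, 1/(mu - muc), 1/(2 (muc - mu))];
tmax[mu_, muc_] := 20 tau[mu, muc];
{tmax[0.4, 0.5], tmax[0.45, 0.5]}   (* {100., 200.}: the budget grows near muc *)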
The phenomenon of critical slowing down was first discussed in the framework of the kinetics of phase transitions [2].
3. Example: A 1D Ginzburg-Landau Equation
As an example, let us study the 1D PDE:
$\dfrac{d^2 u_s}{dx^2} - V(x)\,u_s - \mu\,u_s - u_s^3 = 0,$   (16)
where $u_s(x)$ is the dependent variable of the single coordinate $x$. This equation exhibits a cubic nonlinearity $u_s^3$. A classical Ginzburg-Landau equation only has constant coefficients for the terms $u_s$ and $u_s^3$. In contrast, equation (16) possesses the inhomogeneity $V(x)$ with
$V(x) = \dfrac{7}{2} - \dfrac{6}{\cosh^2 x},$   (17)
shown by the solid line in Figure 4. It thus represents a nonhomogeneous version of the Ginzburg-Landau equation. One can see that (16) has the trivial solution $u_s = 0$.
Figure 4. The potential $V(x)$ from equation (17) (solid, red) and the ground-state solution $\psi_0(x)$ of the auxiliary equation (18) (dashed, blue).
Equations (16) and (17) play an important role in the theory of the transformation of types of domain walls into one another [3].
The auxiliary equation (5) in this case takes the following form:
$\dfrac{d^2 \psi_n}{dx^2} - V(x)\,\psi_n + \lambda_n\,\psi_n = 0,$   (18)
where $n$ enumerates the eigenvalues and eigenfunctions belonging to the discrete spectrum. One can see that equation (18) represents the Schrödinger equation [4] with potential well (17) and energy $E = \lambda_n$.
The exact solution of the auxiliary equation (18) is known [3, 4]. It has two discrete eigenvalues, at n=0 and n=1, and the ground-state (n=0) solution has the form
$\psi_0(x) = \dfrac{1}{\cosh^2 x}, \qquad \lambda_0 = -\dfrac{1}{2},$   (19)
which can be easily checked by direct substitution.
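The direct substitution can be delegated to Mathematica; the following short check uses the potential and ground state as reconstructed in (17) and (19) above:

(* Check of (19) in (18): the expression must simplify to zero. *)
v[x_] := 7/2 - 6 Sech[x]^2;
psi0[x_] := Sech[x]^2;
Simplify[D[psi0[x], {x, 2}] - v[x] psi0[x] + (-1/2) psi0[x]]   (* 0 *)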
The energy functional generating the Ginzburg-Landau equation (16, 17) has the form:
$E[u] = \int_{-\infty}^{+\infty}\left[\dfrac{1}{2}\left(\dfrac{du}{dx}\right)^2 + \dfrac{1}{2}\left(V(x) + \mu\right)u^2 + \dfrac{1}{4}\,u^4\right]dx.$   (20)
Equation (6) can be written as $u_s = \xi\,\psi_0(x)$. Substituting that into equation (20) for the energy, eliminating the term with the derivative using equation (18) and applying the Gauss theorem, one finds the energy as a function of the amplitude $\xi$:
$E(\xi) = \dfrac{1}{2}\left(\mu - \mu_c\right)\xi^2 \int_{-\infty}^{+\infty}\psi_0^2\,dx + \dfrac{1}{4}\,\xi^4 \int_{-\infty}^{+\infty}\psi_0^4\,dx,$   (21)
where $\mu_c = -\lambda_0 = 1/2$. The ramification equation takes the form $dE/d\xi = 0$:
$\left(\mu - \mu_c\right)\xi \int_{-\infty}^{+\infty}\psi_0^2\,dx + \xi^3 \int_{-\infty}^{+\infty}\psi_0^4\,dx = 0,$   (22)
with the following solution for the amplitude:
$\xi = \pm\sqrt{\left(\mu_c - \mu\right)\dfrac{\int \psi_0^2\,dx}{\int \psi_0^4\,dx}}, \qquad \mu < \mu_c.$   (23)
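The two integrals entering (21)-(23) are elementary for the ground state assumed above, and the amplitude below the bifurcation point follows at once:

(* Integrals and amplitude for psi0 = Sech[x]^2, muc = 1/2. *)
a = Integrate[Sech[x]^4, {x, -Infinity, Infinity}];   (* 4/3 *)
b = Integrate[Sech[x]^8, {x, -Infinity, Infinity}];   (* 32/35 *)
xi[mu_] := Sqrt[(1/2 - mu) a/b];
xi[0.3]   (* ~0.54 *)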
4. Numerical Solution of the Ginzburg-Landau Equation
4.1. Pseudo-Time-Dependent Equation
Let us now look for the numerical solution of equation (16). The problem to be solved is to find the point of bifurcation and the overcritical solution at $\mu < \mu_c$. The pseudo-time-dependent equation can be written as:
$\dfrac{\partial u}{\partial t} = \dfrac{\partial^2 u}{\partial x^2} - V(x)\,u - \mu\,u - u^3.$   (24)
The choice of the initial condition $u(x, 0) = u_0(x)$ is not critical, provided it is nonzero. The method of lines employed in the following is relatively insensitive to whether or not the initial condition precisely matches the boundary conditions. We demonstrate its solution with three initial conditions in the next section.
4.2. Solution within a Finite Domain
The method of lines is applied here since it can solve nonlinear PDEs, provided these equations are dynamic, which is exactly the case within the pseudo-time-dependent approach.
To address the problem numerically, let us start with the boundary conditions taken at a finite distance, rather than at infinity. The distance must be greater than the characteristic dimension of the equation, which is the distance over which the solution exhibits a considerable variation. For the Ginzburg-Landau equation (16), the characteristic dimension is defined by the width of the potential well of (17), which is about 1. That is, let us start with the boundaries at $x = \pm L$, with $L$ equal to many characteristic dimensions. We check the quality of the result obtained with such a boundary later.
To obtain a precise enough solution, one needs a spatial discretization with a step small compared to the characteristic dimension of the equation, which we just saw is of the order of 1. A step amounting to a few hundredths of this dimension appears to be enough.
The following code solves the equation, keeping the discretization step small compared to the characteristic equation dimension.
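The code itself did not survive the conversion of this article. The following is a minimal reconstruction, not the author's original: it integrates (24) by the method of lines via NDSolve. The well is a stand-in (the exactly solvable U(x) = -6 sech²x used above, for which the bifurcation of (24) sits at ε* = -E₀ = 4 rather than at the value fitted later in this article); the name solveGL and all numerical settings are choices made here, not taken from the paper.

U[x_] := -6 Sech[x]^2;   (* stand-in for the well (17) *)
L = 10;                  (* half-width of the computational domain *)
solveGL[eps_, tmax_, init_: (Exp[-#^2] &)] :=
 NDSolveValue[{
   (* pseudo-dynamic equation (24): gradient flow of the functional (20) *)
   D[v[x, t], t] == D[v[x, t], {x, 2}] - (eps + U[x]) v[x, t] - v[x, t]^3,
   v[x, 0] == init[x],   (* nonzero initial condition *)
   v[-L, t] == 0, v[L, t] == 0},
  v, {x, -L, L}, {t, 0, tmax},
  Method -> {"MethodOfLines",
    "SpatialDiscretization" -> {"TensorProductGrid", "MinPoints" -> 401}}]

With these settings the grid step is 2L/400 = 0.05, that is, a few hundredths of the characteristic dimension, as required.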
To avoid conflicts with variables that may have been previously set, this notebook has the setting Evaluation ▶ Notebook's Default Context ▶ Unique to This Notebook.
According to Section 2, the time-dependent solution converges, as t → ∞, to the solution of the stationary problem. In practice, however, one can instead take some finite simulation time t_max, provided that it is large enough.
We solve the pseudo-dynamic equation (24) with each of the three initial conditions stated before.
Further, in order to give a feeling for the method, we visualize and animate the solution, varying ε as well as the initial conditions. This requires a few comments. As discussed in Section 2.2, the maximum time of simulation strongly depends on ε. This is accounted for by introducing t_max according to (15), with a prefactor chosen by trial so that the simulation does not last too long, but also so that the value of t_max always ensures the convergence for any combination of ε and initial condition.
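A stand-in for the notebook's interactive animation, which was lost in conversion (names as defined in the sketch above):

With[{eps = 3.0, tmax = 40},
 Module[{s = solveGL[eps, tmax]},
  Animate[
   Plot[s[x, t], {x, -L, L}, PlotRange -> {-0.1, 1.5},
    AxesLabel -> {"x", "\[Eta]"}],
   {t, 0, tmax}, AnimationRunning -> False]]]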
In the simulations, you can observe two essential features of the present method.
First, near the fixed point, the solution converges more slowly and the curve gradually appears to stop changing.
Second, near the critical point, close to ε*, the critical slowing down (see Section 2.2) takes place, which requires considerably longer to approach the fixed point. In the animation, the curve evolves much more slowly for ε close to ε*, and the convergence, therefore, requires much more time.
In the animation, choose one of the three initial conditions and a value of ε. Click the button with the arrow to start the animation. The value of the current time t is shown at the top-left corner. The distribution shown by the blue curve at t = 0 corresponds to the initial condition, while at t > 0 the animation shows its further evolution.
For each of the three initial conditions, the solution converges to the same bell-shaped curve. One can make sure that for low ε, the solution is nonzero. However, for ε greater than about 0.5, the solution is trivial.
4.3. The Solution Norm and the Convergence Control
To get an accurate solution, one needs to control the convergence as the pseudo-time increases. Here we control the convergence by analyzing the behavior of the integral

N = \int_{-L}^{L}\eta^2(x, t_{max})\,dx \qquad (25)

(the norm of the solution in Hilbert space) at a fixed value of the parameter ε as a function of t_max. The norm is zero above the bifurcation but nonzero below it.
We show how N depends on the time limit t_max at three fixed values of the control parameter ε, which all lie below the bifurcation point ε*.
The following code makes a nested list containing three sublists corresponding to the three values of ε. Each sublist consists of pairs {t_max, N} at different values of the simulation time t_max, which increases from 10 to approximately 3000. The exponential rate of increase is chosen so as to make the plot on a semilogarithmic scale look equally spaced (Figure 5).
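A sketch of that computation, again with the stand-in model (for which ε* = 4); the three ε values and the time grid below are illustrative, not the article's:

norm[eps_, tmax_] := Module[{s = solveGL[eps, tmax]},
  NIntegrate[s[x, tmax]^2, {x, -L, L}]]       (* the Hilbert norm (25) *)

tList = Table[Round[10*1.31^k], {k, 0, 21}];  (* ~10 ... ~3000, log-spaced *)
data = Table[{tm, norm[eps, tm]}, {eps, {2.0, 3.0, 3.5}}, {tm, tList}];
ListLogLinearPlot[data, Joined -> True]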
Figure 5. Semilogarithmic plots of the Hilbert norm N of the solution for three values of ε below the bifurcation point (disks, squares and diamonds), depending on the simulation time t_max.
There is convergence for all three values of ε. However, the value of t_max for which the convergence is satisfactory depends on ε. For example, at the ε value farthest from the bifurcation point, the solution at t_max slightly exceeding 100 is already near convergence. Thus, with a somewhat larger t_max, one can be sure that the solution is satisfactory. We use this in Section 4.4 to determine the expression for t_max accounting for the critical slowing down.
In contrast, the solution for the ε value closest to the bifurcation point shows some evolution even at the largest t_max.
4.4. The Critical Slowing Down in the Numeric Process
As we showed in Section 2.2, the value of t_max that gives satisfactory convergence depends on ε. To get an accurate solution, t_max must considerably exceed the relaxation time τ. For example, in the calculation of the result shown in Figure 4, substituting the values of ε and ε* into (15), one finds the relaxation time τ, while the convergence only becomes good enough at a simulation time about eight times greater than τ. This implies that to find an accurate solution in the close vicinity of the bifurcation point, one has to define t_max depending on ε by

t_{max}(\varepsilon) = \frac{t_0}{|\varepsilon-\varepsilon^*| + \delta}, \qquad (26)

where t₀ is a constant prefactor and δ is the regularization parameter, which keeps t_max finite at ε = ε*.
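In code, this is a direct transcription of (26) as reconstructed above; t₀ and δ are to be chosen by trial:

tMax[eps_, epsStar_, t0_, delta_] := t0/(Abs[eps - epsStar] + delta)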
4.5. In Search of the Bifurcation Point
The bifurcation point can be found by analyzing the same integral (25), calculated at the t_max given by (26). Let us denote it N(ε). This time we study the integral as a function of the parameter ε.
The transition from the nontrivial to the trivial solution occurs at the bifurcation point. Accordingly, the integral at this point changes from N > 0 to N = 0.
To find the critical point, bifurcation theory (23) predicts the norm to be expressed in the form:

N(\varepsilon) = A\,(\varepsilon^*-\varepsilon)\ \ \text{for}\ \varepsilon<\varepsilon^*, \qquad N(\varepsilon)=0\ \ \text{for}\ \varepsilon\ge\varepsilon^*. \qquad (27)

We find the constant parameters A and ε* by fitting.
We now find the numerical solution of the equation (16) as a function of the control parameter ε; the norm obtained from this solution depends on ε. We vary ε from 0.45 to beyond the transition to create a list consisting of pairs {ε, N(ε)}. The most critical region for the dependence is close to the critical point, so the points there are taken to be about 10 times more dense. This list is fitted to the function (27). The list is plotted with the analytic function obtained by fitting (Figure 6).
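A sketch of the fit, assuming the piecewise-linear form (27); with the stand-in well of the previous sketches, the fitted ec should come out near 4, not near the article's value:

dataEps = Table[{eps, norm[eps, tMax[eps, 4.0, 20, 0.05]]},
   {eps, Join[Range[3.5, 3.86, 0.06], Range[3.9, 4.2, 0.02]]}]; (* denser near eps* *)
fit = NonlinearModelFit[dataEps, a Max[ec - eps, 0], {{a, 1}, {ec, 3.9}}, eps];
fit["BestFitParameters"]

If the kink at ec troubles the optimizer, one can instead restrict the fit to the points visibly below the transition and use the smooth model a (ec - eps).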
Figure 6. Behavior of the Hilbert norm of the solution in the vicinity of the bifurcation point. Dots show the integrals (25), while the solid line indicates the result of fitting with the relation (27), yielding the bifurcation point ε*.
The values of the integrals at various ε are shown by the red dots in Figure 6, while the fitting curve is shown by the solid blue curve. The fit yields the bifurcation point ε* together with the amplitude A.
We used equation (26) for the t_max used in the solution. However, this equation depends on the spectral value ε*. In the present case, the value of ε* was known, which considerably simplifies the task. In general, the value of ε* is only established in the course of the fitting procedure, requiring an iterative approach. For the first simulation, we fix some large enough value of t_max independent of ε and obtain a fit. This fit gives the first guess for ε*, which can then be used for the simulation with the equation (26). This procedure can be repeated until a satisfactory ε* is achieved.
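Schematically, a hypothetical loop built from the sketches above (a few passes are usually ample):

epsStar = 3.5;   (* deliberately rough first guess *)
Do[
 dataIt = Table[{eps, norm[eps, tMax[eps, epsStar, 20, 0.05]]},
   {eps, 3.0, 4.4, 0.1}];
 fitIt = NonlinearModelFit[dataIt, a Max[ec - eps, 0], {{a, 1}, {ec, epsStar}}, eps];
 epsStar = ec /. fitIt["BestFitParameters"],
 {3}];
epsStar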
4.6. Varying the Boundary
To check how the choice of the boundary affects the results, we solve the problem, gradually increasing L (Figure 7). (This takes some time.)
Figure 7. A double-logarithmic plot showing the convergence of the bifurcation point with increasing L.
Figure 7 displays the error in the spectral value ε* obtained by the numerical process. As one could have expected, the error decreases with the increase of L.
5. Discussion
The preceding example has shown the application of the pseudo-dynamic approach for solving a 1D nonlinear PDE with zero boundary conditions that exhibits a supercritical (soft) bifurcation. That simple problem was chosen to keep the processing time as short as possible. Now possible extensions are discussed.
5.1. Nonzero Boundary Conditions
Recall that zero boundary conditions often (if not always) represent a problem for a nonlinear solver. Starting from zero along the boundary, such a solver often only returns the trivial solution, since zero is, indeed, a solution of the equation considered here. For this reason, a solution to a problem like the one discussed in this article necessarily requires some specific approach that can converge to a nontrivial solution. It is for this type of equation that the approach presented here has been developed.
One should, however, make two comments.
First, there are numerous problems where the bifurcation takes place from a solution that is nonzero, say η_b. The boundary condition in this case has the form η = η_b along the boundary. A trivial observation shows that one comes back to the original problem by the shift η → η − η_b.
Second, the approach formulated here can be applied to nonlinear equations with no bifurcation. These equations can have boundary conditions that are either zero or nonzero. Indeed, such equations can often be solved by a nonlinear solver if one is available. Among other approaches, the present one can be applied; the nonzero boundary conditions are not an obstacle for the transition to the pseudo-time-dependent equation.
Though the present approach takes longer, in certain cases it is preferable; for example, when the nonlinear solvers fail due to a strong nonlinearity. The solver moves along the pseudo-time parameter in small steps from t = 0 to t = t_max, gradually passing from the initial condition to the final solution. Such a slow ramping can be stable.
5.2. Dimensionality
The space dimensionality does not limit the application of our approach (for 2D examples, see [5, 6]).
5.3. A Supercritical (Soft) versus a Subcritical (Hard) Bifurcation
In the case of a soft bifurcation, the energy can have only one type of minimum, as shown in Figure 2 describing the convergence either to the trivial or the nontrivial solution. The trajectory always flows into the minimum along the steepest slope of . The minimum is a fixed point.
An essentially different situation occurs for a hard bifurcation, when the hypersurface E may have multiple minima. Figure 8 (A) shows a schematic cross section of the infinite-dimensional functional E along one plane, leaving out all other dimensions. This cross section shows the situation with minima of different types. One of these minima is more pronounced than the others. The arrows schematically indicate the trajectories in the functional space. These start from the initial conditions displayed by the dots in Figure 8 (A, B) and converge to the minima (Figure 8 A). The green arrow shows the convergence of the process to the principal minimum, while the red one converges to a secondary minimum.
Figure 8. Schematic view of the energy functional along a direction of the functional space, where it exhibits a metastable minimum (A). The green point schematically indicates the initial condition starting from which the solution converges to the one corresponding to the principal energy minimum (green arrow), while the red dot shows the initial condition leading to the convergence to the secondary minimum. (B) The trajectory ends at an inflection point.
As a result, depending on the choice of initial condition, some solution trajectories may end up at a fixed point that is a secondary minimum rather than in the main one.
Also, keep in mind that the dimension of the functional space is infinite and can have many unobvious secondary minima.
There can also be inflection and saddle points of the energy hypersurface (Figure 8 B). The trajectory completely stops at such a point.
It is a fundamental question whether or not such secondary fixed points as well as the inflection points belong to the problem under study. The answer is not straightforward. One should look for such an answer based on the origin of the equation.
Let us also mention possible gently sloping valleys in the energy relief. In this case, the motion along such a shallow slope may appear practically indistinguishable from an asymptotic falling into a fixed point during the numerical process.
6. Summary
This article offers an approach to solve nonlinear stationary partial differential equations numerically. It is especially useful in the case of equations with zero boundary conditions that have both a trivial solution and nontrivial solutions. The approach is based on solving a pseudo-time-dependent equation instead of the stationary one, the initial condition being different from zero. Then the solver can avoid sticking to the trivial solution and is able to converge to a nontrivial solution. However, the penalty is increased simulation time.
[1] M. M. Vainberg and V. A. Trenogin, Theory of Branching of Solutions of Non-linear Equations, Leyden, Netherlands: Noordhoff International Publishing, 1974.
[2] E. M. Lifshitz and L. P. Pitaevskii, Physical Kinetics: Course of Theoretical Physics, Vol. 10, Oxford, UK: Pergamon, 1981, Chapter 101.
[3] A. A. Bullbich and Yu. M. Gufan, Phase Transitions in Domain Walls, Ferroelectrics, 98(1), 1989 pp. 277–290. doi:10.1080/00150198908217589.
[4] L. D. Landau and E. M. Lifshitz, Quantum Mechanics: Course of Theoretical Physics, Vol. 3, 3rd ed., Oxford, UK: Butterworth-Heinemann, 2003.
[5] A. Boulbitch and A. L. Korzhenevskii, Field-Theoretical Description of the Formation of a Crack Tip Process Zone, European Physical Journal B, 89(261), 2016 pp. 1–18. doi:10.1140/epjb/e2016-70426-6.
[6] A. Boulbitch, Yu. M. Gufan and A. L. Korzhenevskii, Crack-Tip Process Zone as a Bifurcation Problem, Physical Review E, 96(013005), 2017 pp. 1–19. doi:10.1103/PhysRevE.96.013005.
A. Boulbitch, Pseudo-Dynamic Approach to the Numerical Solution of Nonlinear Stationary Partial Differential Equations, The Mathematica Journal, 2018.
About the Author
Alexei Boulbitch graduated from Rostov University (USSR) in 1980 and obtained his Ph.D. in theoretical solid-state physics in 1988 from this university. In 1990 he moved to the University of Picardie (France) and later to the Technical University of Munich (Germany). The Technical University of Munich granted him his habilitation degree in theoretical biophysics in 2001. His areas of interest are bacteria, biomembranes, cells, defects in crystals, phase transitions, physics of fracture (currently active), polymers and sensors (currently active). He presently works in industrial physics with a focus on sensors and gives lectures at the University of Luxembourg.
Alexei Boulbitch
Zum Waldeskühl 12
54298 Igel |
e55f51c16d55b337 | Partial differential equation
Partial differential equation
A visualisation of a solution to the heat equation on a two dimensional plane
In mathematics, partial differential equations (PDEs) are a type of differential equation, i.e., a relation involving an unknown function (or functions) of several independent variables and their partial derivatives with respect to those variables. PDEs are used to formulate, and thus aid the solution of, problems involving functions of several variables.
PDEs are for example used to describe the propagation of sound or heat, electrostatics, electrodynamics, fluid flow, and elasticity. These seemingly distinct physical phenomena can be formalized identically (in terms of PDEs), which shows that they are governed by the same underlying dynamic. PDEs find their generalization in stochastic partial differential equations. Just as ordinary differential equations often model dynamical systems, partial differential equations often model multidimensional systems.
A partial differential equation (PDE) for the function u(x1,...xn) is an equation of the form
F(x_1, \cdots x_n,u,\frac{\partial}{\partial x_1}u, \cdots \frac{\partial}{\partial x_n}u,\frac{\partial^2}{\partial x_1 \partial x_1}u, \frac{\partial^2}{\partial x_1 \partial x_2}u, \cdots ) = 0 \,
If F is a linear function of u and its derivatives, then the PDE is called linear. Common examples of linear PDEs include the heat equation, the wave equation and Laplace's equation.
A relatively simple PDE is
\frac{\partial}{\partial x}u(x,y)=0\, .
This relation implies that the function u(x,y) is independent of x. Hence the general solution of this equation is
u(x,y) = f(y),\,
where f is an arbitrary function of y. The analogous ordinary differential equation is u'(x) = 0, which has the solution
u(x) = c,\,
where c is any constant. These two examples illustrate that general solutions of ordinary differential equations involve arbitrary constants, whereas solutions of partial differential equations involve arbitrary functions.
Existence and uniqueness
An example of pathological behavior is the sequence of Cauchy problems (depending upon n) for the Laplace equation
\frac{\part^2 u}{\partial x^2} + \frac{\part^2 u}{\partial y^2}=0,\,
with boundary conditions
u(x,0) = 0, \,
\frac{\partial u}{\partial y}(x,0) = \frac{\sin n x}{n},\,
where n is an integer. The derivative of u with respect to y approaches 0 uniformly in x as n increases, but the solution is
u(x,y) = \frac{(\sinh ny)(\sin nx)}{n^2}.\,
This solution approaches infinity if nx is not an integer multiple of π for any non-zero value of y. The Cauchy problem for the Laplace equation is called ill-posed or not well posed, since the solution does not depend continuously upon the data of the problem. Such ill-posed problems are not usually satisfactory for physical applications.
Notation
In PDEs, it is common to denote partial derivatives using subscripts. That is:
u_x = {\partial u \over \partial x}
u_{xy} = {\part^2 u \over \partial y\, \partial x} = {\partial \over \partial y } \left({\partial u \over \partial x}\right).
Especially in (mathematical) physics, one often prefers the use of del (which in cartesian coordinates is written \nabla=(\part_x,\part_y,\part_z)\, ) for spatial derivatives and a dot \dot u\, for time derivatives. For example, the wave equation (described below) can be written as
\ddot u=c^2\nabla^2u\, (physics notation),
\ddot u=c^2\Delta u\, (math notation),
where Δ is the Laplace operator.
Heat equation in one space dimension
The equation for conduction of heat in one dimension for a homogeneous body has the form
u_t = \alpha u_{xx} \,
where u(t,x) is temperature, and α is a positive constant that describes the rate of diffusion. The Cauchy problem for this equation consists in specifying u(0,x) = f(x), where f(x) is an arbitrary function.
General solutions of the heat equation can be found by the method of separation of variables. Some examples appear in the heat equation article. They are examples of Fourier series for periodic f and Fourier transforms for non-periodic f. Using the Fourier transform, a general solution of the heat equation has the form
u(t,x) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} F(\xi) e^{-\alpha \xi^2 t} e^{i \xi x} d\xi, \,
where F is an arbitrary function. To satisfy the initial condition, F is given by the Fourier transform of f, that is
F(\xi) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} f(x) e^{-i \xi x}\, dx. \,
If f represents a very small but intense source of heat, then the preceding integral can be approximated by the delta distribution, multiplied by the strength of the source. For a source whose strength is normalized to 1, the result is
F(\xi) = \frac{1}{\sqrt{2\pi}}, \,
and the resulting solution of the heat equation is
u(t,x) = \frac{1}{2\pi} \int_{-\infty}^{\infty}e^{-\alpha \xi^2 t} e^{i \xi x} d\xi. \,
This is a Gaussian integral. It may be evaluated to obtain
u(t,x) = \frac{1}{2\sqrt{\pi \alpha t}} \exp\left(-\frac{x^2}{4 \alpha t} \right). \,
This result corresponds to the normal probability density for x with mean 0 and variance 2αt. The heat equation and similar diffusion equations are useful tools to study random phenomena.
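As a quick symbolic check (added here; it is not part of the original entry), the kernel just obtained indeed satisfies the heat equation for t > 0:

kernel[t_, x_] := Exp[-x^2/(4 alpha t)]/(2 Sqrt[Pi alpha t]);
Simplify[D[kernel[t, x], t] - alpha D[kernel[t, x], {x, 2}],
 Assumptions -> {t > 0, alpha > 0}]
(* 0 *)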
Wave equation in one spatial dimension
The wave equation is an equation for an unknown function u(t, x) of the form
u_{tt} = c^2 u_{xx}. \,
Here u might describe the displacement of a stretched string from equilibrium, or the difference in air pressure in a tube, or the magnitude of an electromagnetic field in a tube, and c is a number that corresponds to the velocity of the wave. The Cauchy problem for this equation consists in prescribing the initial displacement and velocity of a string or other medium:
u(0,x) = f(x), \,
u_t(0,x) = g(x), \,
where f and g are arbitrary given functions. The solution of this problem is given by d'Alembert's formula:
u(t,x) = \frac{1}{2} \left[f(x-ct) + f(x+ct)\right] + \frac{1}{2c}\int_{x-ct}^{x+ct} g(y)\, dy. \,
This formula implies that the solution at (t,x) depends only upon the data on the segment of the initial line that is cut out by the characteristic curves
x - ct = \hbox{constant,} \quad x + ct = \hbox{constant}, \,
that are drawn backwards from that point. These curves correspond to signals that propagate with velocity c forward and backward. Conversely, the influence of the data at any given point on the initial line propagates with the finite velocity c: there is no effect outside a triangle through that point whose sides are characteristic curves. This behavior is very different from the solution for the heat equation, where the effect of a point source appears (with small amplitude) instantaneously at every point in space. The solution given above is also valid if t is negative, and the explicit formula shows that the solution depends smoothly upon the data: both the forward and backward Cauchy problems for the wave equation are well-posed.
Spherical waves
Spherical waves are waves whose amplitude depends only upon the radial distance r from a central point source. For such waves, the three-dimensional wave equation takes the form
u_{tt} = c^2 \left[u_{rr} + \frac{2}{r} u_r \right]. \,
This is equivalent to
(ru)_{tt} = c^2 \left[(ru)_{rr} \right],\,
and hence the quantity ru satisfies the one-dimensional wave equation. Therefore a general solution for spherical waves has the form
u(t,r) = \frac{1}{r} \left[F(r-ct) + G(r+ct) \right],\,
where F and G are completely arbitrary functions. Radiation from an antenna corresponds to the case where G is identically zero. Thus the wave form transmitted from an antenna has no distortion in time: the only distorting factor is 1/r. This feature of undistorted propagation of waves is not present if there are two spatial dimensions.
Laplace equation in two dimensions
The Laplace equation for an unknown function of two variables φ has the form
\varphi_{xx} + \varphi_{yy} = 0.\,
Solutions of Laplace's equation are called harmonic functions.
Connection with holomorphic functions
Solutions of the Laplace equation in two dimensions are intimately connected with analytic functions of a complex variable (a.k.a. holomorphic functions): the real and imaginary parts of any analytic function are conjugate harmonic functions: they both satisfy the Laplace equation, and their gradients are orthogonal. If f=u+iv, then the Cauchy–Riemann equations state that
u_x = v_y, \quad v_x = -u_y,\,
and it follows that
u_{xx} + u_{yy} = 0, \quad v_{xx} + v_{yy}=0. \,
Conversely, given any harmonic function in two dimensions, it is the real part of an analytic function, at least locally. Details are given in Laplace equation.
A typical boundary value problem
A typical problem for Laplace's equation is to find a solution that satisfies arbitrary values on the boundary of a domain. For example, we may seek a harmonic function that takes on the values u(θ) on a circle of radius one. The solution was given by Poisson:
\varphi(r,\theta) = \frac{1}{2\pi} \int_0^{2\pi} \frac{1-r^2}{1 +r^2 -2r\cos (\theta -\theta')} u(\theta')d\theta'.\,
Petrovsky (1967, p. 248) shows how this formula can be obtained by summing a Fourier series for φ. If r<1, the derivatives of φ may be computed by differentiating under the integral sign, and one can verify that φ is analytic, even if u is continuous but not necessarily differentiable. This behavior is typical for solutions of elliptic partial differential equations: the solutions may be much more smooth than the boundary data. This is in contrast to solutions of the wave equation, and more general hyperbolic partial differential equations, which typically have no more derivatives than the data.
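A short computation (added here) confirms that the Poisson kernel in the integrand is itself harmonic inside the unit circle, so differentiating under the integral sign for r < 1 indeed produces a harmonic φ:

pk[r_, th_] := (1 - r^2)/(1 + r^2 - 2 r Cos[th]);
Simplify[Laplacian[pk[r, th], {r, th}, "Polar"]]
(* 0 *)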
Euler–Tricomi equation
The Euler–Tricomi equation is used in the investigation of transonic flow.
u_{xx} \, =xu_{yy}.
Advection equation
The advection equation describes the transport of a conserved scalar ψ in a velocity field {\bold u}=(u,v,w). It is:
\psi_t+(u\psi)_x+(v\psi)_y+(w\psi)_z \, =0.
If the velocity field is solenoidal (that is, \nabla\cdot{\bold u}=0), then the equation may be simplified to
\psi_t+u\psi_x+v\psi_y+w\psi_z \, =0.
In the one-dimensional case where the velocity u is not constant but is equal to the transported quantity ψ itself, the equation is referred to as Burgers' equation.
Ginzburg–Landau equation
The Ginzburg–Landau equation is used in modelling superconductivity. It is
iu_t+pu_{xx} +q|u|^2u \, =i\gamma u
where p,q\in\mathbb{C} and \gamma\in\mathbb{R} are constants and i is the imaginary unit.
The Dym equation
The Dym equation is named for Harry Dym and occurs in the study of solitons. It is
u_t \, = u^3u_{xxx}.
Initial-boundary value problems
Many problems of mathematical physics are formulated as initial-boundary value problems.
Vibrating string
If the string is stretched between two points where x=0 and x=L and u denotes the amplitude of the displacement of the string, then u satisfies the one-dimensional wave equation in the region where 0<x<L and t is unlimited. Since the string is tied down at the ends, u must also satisfy the boundary conditions
u(t,0)=0, \quad u(t,L)=0, \,
as well as the initial conditions
u(0,x)=f(x), \quad u_t(0,x)=g(x). \,
The method of separation of variables for the wave equation
u_{tt} = c^2 u_{xx}, \,
leads to solutions of the form
u(t,x) = T(t) X(x),\,
T'' + k^2 c^2 T=0, \quad X'' + k^2 X=0,\,
where the constant k must be determined. The boundary conditions then imply that X is a multiple of sin kx, and k must have the form
k= \frac{n\pi}{L}, \,
where n is an integer. Each term in the sum corresponds to a mode of vibration of the string. The mode with n=1 is called the fundamental mode, and the frequencies of the other modes are all multiples of this frequency. They form the overtone series of the string, and they are the basis for musical acoustics. The initial conditions may then be satisfied by representing f and g as infinite sums of these modes. Wind instruments typically correspond to vibrations of an air column with one end open and one end closed. The corresponding boundary conditions are
X(0) =0, \quad X'(L) = 0.\,
The method of separation of variables can also be applied in this case, and it leads to a series of odd overtones.
The general problem of this type is solved in Sturm–Liouville theory.
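Each separated mode can be checked directly (a verification added here, not in the original entry):

mode[t_, x_] := Sin[n Pi x/L] Cos[n Pi c t/L];
Simplify[{D[mode[t, x], {t, 2}] - c^2 D[mode[t, x], {x, 2}],
   mode[t, 0], mode[t, L]},
 Assumptions -> Element[n, Integers]]
(* {0, 0, 0}: the wave equation and both fixed ends are satisfied *)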
Vibrating membrane
If a membrane is stretched over a curve C that forms the boundary of a domain D in the plane, its vibrations are governed by the wave equation
\frac{1}{c^2} u_{tt} = u_{xx} + u_{yy}, \,
if t>0 and (x,y) is in D. The boundary condition is u(t,x,y) = 0 if (x,y) is on C. The method of separation of variables leads to the form
u(t,x,y) = T(t) v(x,y),\,
which in turn must satisfy
\frac{1}{c^2}T'' +k^2 T=0, \,
v_{xx} + v_{yy} + k^2 v =0.\,
The latter equation is called the Helmholtz Equation. The constant k must be determined to allow a non-trivial v to satisfy the boundary condition on C. Such values of k2 are called the eigenvalues of the Laplacian in D, and the associated solutions are the eigenfunctions of the Laplacian in D. The Sturm–Liouville theory may be extended to this elliptic eigenvalue problem (Jost, 2002).
Other examples
The Schrödinger equation is a PDE at the heart of non-relativistic quantum mechanics. In the WKB approximation it is the Hamilton–Jacobi equation.
Except for the Dym equation and the Ginzburg–Landau equation, the above equations are linear in the sense that they can be written in the form Au = f for a given linear operator A and a given function f. Other important non-linear equations include the Navier–Stokes equations describing the flow of fluids, and Einstein's field equations of general relativity.
Also see the list of non-linear partial differential equations.
Classification
Some linear, second-order partial differential equations can be classified as parabolic, hyperbolic or elliptic. Others, such as the Euler–Tricomi equation, have different types in different regions. The classification provides a guide to appropriate initial and boundary conditions, and to the smoothness of the solutions.
Equations of first order
Equations of second order
Assuming uxy = uyx, the general second-order PDE in two independent variables has the form
Au_{xx} + 2Bu_{xy} + Cu_{yy} + \cdots = 0,
where the coefficients A, B, C etc. may depend upon x and y. If A2 + B2 + C2 > 0 over a region of the xy plane, the PDE is second-order in that region. This form is analogous to the equation for a conic section:
Ax^2 + 2Bxy + Cy^2 + \cdots = 0.
1. B^2 - AC \, < 0 : equations that are elliptic at every point have solutions that are as smooth as the coefficients allow. An example is Laplace's equation; the Euler–Tricomi equation is elliptic where x<0.
2. B^2 - AC = 0\, : equations that are parabolic at every point can be transformed into a form analogous to the heat equation by a change of independent variables. Solutions smooth out as the transformed time variable increases. The Euler–Tricomi equation has parabolic type on the line where x=0.
3. B^2 - AC \, > 0 : hyperbolic equations retain any discontinuities of functions or derivatives in the initial data. An example is the wave equation. The motion of a fluid at supersonic speeds can be approximated with hyperbolic PDEs, and the Euler–Tricomi equation is hyperbolic where x>0.
If there are n independent variables x1, x2 , ..., xn, a general linear partial differential equation of second order has the form
L u =\sum_{i=1}^n\sum_{j=1}^n a_{i,j} \frac{\part^2 u}{\partial x_i \partial x_j} \quad \hbox{ plus lower order terms} =0. \,
The classification depends upon the signature of the eigenvalues of the coefficient matrix.
1. Elliptic: The eigenvalues are all positive or all negative.
2. Parabolic : The eigenvalues are all positive or all negative, save one that is zero.
3. Hyperbolic: There is only one negative eigenvalue and all the rest are positive, or there is only one positive eigenvalue and all the rest are negative.
4. Ultrahyperbolic: There is more than one positive eigenvalue and more than one negative eigenvalue, and there are no zero eigenvalues. There is only limited theory for ultrahyperbolic equations (Courant and Hilbert, 1962).
Systems of first-order equations and characteristic surfaces
The classification of partial differential equations can be extended to systems of first-order equations, where the unknown u is now a vector with m components, and the coefficient matrices Aν are m by m matrices for \nu=1, \dots,n. The partial differential equation takes the form
Lu = \sum_{\nu=1}^{n} A_\nu \frac{\partial u}{\partial x_\nu} + B=0, \,
If a hypersurface S is given in the implicit form
\varphi(x_1, x_2, \ldots, x_n)=0, \,
then S is called a characteristic surface for the operator L at a given point if the characteristic form vanishes there:
Q\left(\frac{\part\varphi}{\partial x_1}, \ldots,\frac{\part\varphi}{\partial x_n}\right) =\det\left[\sum_{\nu=1}^nA_\nu \frac{\partial \varphi}{\partial x_\nu}\right]=0.\,
1. A first-order system Lu=0 is elliptic if no surface is characteristic for L: the values of u on S and the differential equation always determine the normal derivative of u on S.
2. A first-order system is hyperbolic at a point if there is a space-like surface S with normal ξ at that point. This means that, given any non-trivial vector η orthogonal to ξ, and a scalar multiplier λ, the equation
Q(\lambda \xi + \eta) =0, \,
has m real roots λ1, λ2, ..., λm. The system is strictly hyperbolic if these roots are always distinct. The geometrical interpretation of this condition is as follows: the characteristic form Q(ζ)=0 defines a cone (the normal cone) with homogeneous coordinates ζ. In the hyperbolic case, this cone has m sheets, and the axis ζ = λ ξ runs inside these sheets: it does not intersect any of them. But when displaced from the origin by η, this axis intersects every sheet. In the elliptic case, the normal cone has no real sheets.
Equations of mixed type
If a PDE has coefficients that are not constant, it is possible that it will not belong to any of these categories but rather be of mixed type. A simple but important example is the Euler–Tricomi equation
u_{xx} \, = xu_{yy},
which is called elliptic–hyperbolic because it is elliptic in the region x < 0, hyperbolic in the region x > 0, and degenerate parabolic on the line x = 0.
Infinite-order PDEs in quantum mechanics
Weyl quantization in phase space leads to quantum Hamilton's equations for trajectories of quantum particles. Those equations are infinite-order PDEs. However, in the semiclassical expansion one has a finite system of ODEs at any fixed order of \hbar. The equation of evolution of the Wigner function is infinite-order PDE also. The quantum trajectories are quantum characteristics with the use of which one can calculate the evolution of the Wigner function.
Analytical methods to solve PDEs
Separation of variables
In the method of separation of variables, one reduces a PDE to a PDE in fewer variables, which is an ODE if in one variable – these are in turn easier to solve.
Method of characteristics
In special cases, one can find characteristic curves on which the equation reduces to an ordinary differential equation; changing coordinates in the domain to straighten these curves allows the equation to be solved along them, a procedure known as the method of characteristics. More generally, one may find characteristic surfaces.
Integral transform
An integral transform may transform the PDE to a simpler one, in particular a separable PDE. This corresponds to diagonalizing an operator.
Change of variables
Often a PDE can be reduced to a simpler form with a known solution by a suitable change of variables. For example, the Black–Scholes PDE
\frac{\partial V}{\partial t} + \frac{1}{2}\sigma^2 S^2\frac{\partial^2 V}{\partial S^2} + rS\frac{\partial V}{\partial S} - rV = 0
is reducible to the heat equation
\frac{\partial u}{\partial \tau} = \frac{\partial^2 u}{\partial x^2}
by the change of variables (for complete details see Solution of the Black Scholes Equation)
V(S,t) = K v(x,\tau)\,
x = \ln(S/K)\,
\tau = \frac{1}{2} \sigma^2 (T - t)
v(x,\tau)=\exp(-\alpha x-\beta\tau) u(x,\tau).\,
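The reduction can be verified symbolically (a sketch added here, not part of the original entry; kk stands for the strike K, renamed to avoid Mathematica's built-in K, and the standard choices k = 2r/σ², α = (k−1)/2, β = (k+1)²/4 are assumed):

k = 2 r/sigma^2; alpha = (k - 1)/2; beta = (k + 1)^2/4;
v[x_, tau_] := Exp[-alpha x - beta tau] u[x, tau];
V[S_, t_] := kk v[Log[S/kk], sigma^2 (T - t)/2];
bs = D[V[S, t], t] + sigma^2 S^2 D[V[S, t], {S, 2}]/2 +
   r S D[V[S, t], S] - r V[S, t];
(* impose the heat equation u_tau = u_xx and check that the BS operator collapses *)
Simplify[bs /. Derivative[0, 1][u] ->
   Function[{x, tau}, Derivative[2, 0][u][x, tau]]]
(* 0 *)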
Fundamental solution
Inhomogeneous equations can often be solved (and, for constant-coefficient PDEs, always be solved) by finding the fundamental solution (the solution for a point source), then taking the convolution with the boundary conditions to obtain the solution.
Superposition principle
Because any superposition of solutions of a linear, homogeneous PDE is again a solution, the particular solutions may then be combined to obtain more general solutions.
Methods for non-linear equations
See also the list of nonlinear partial differential equations.
The method of characteristics (Similarity Transformation method) can be used in some very special cases to solve partial differential equations.
Lie Group Methods
Numerical methods to solve PDEs
The three most widely used numerical methods to solve PDEs are the finite element method (FEM), finite volume methods (FVM) and finite difference methods (FDM). The FEM has a prominent position among these methods and especially its exceptionally efficient higher-order version hp-FEM. Other versions of FEM include the generalized finite element method (GFEM), extended finite element method (XFEM), spectral finite element method (SFEM), meshfree finite element method, discontinuous Galerkin finite element method (DGFEM), etc.
Finite Element Method
Finite Difference Method
Finite Volume Method
|
0fd897912d2c39e5 | Time and again
A systematic analysis of
the foundations of physics
Marinus Dirk Stafleu
First edition 1980:
Wedge Publishing Foundation, Toronto, Canada
Sacum Beperk: Bloemfontein, South Africa
ISBN 0 88906 108 4
© 1980 M.D.Stafleu
Second, revised edition 2015, 2019
© 2019 M.D.Stafleu
Weeshuislaan 31
3701 JV Zeist, Netherlands
1. Framework
2. Number and space
3. Metric and measurement
4. The dynamic development of kinematics
5. Interaction
6. Irreversibility
7. Wave packets
8. Individuality and probability
9. Probability in quantum physics
10. Structures of individuality
11. Physical characters
Time and Again was first published in 1980, and is now entirely revised. It is intended to prove that the natural sciences, as far as their foundations are concerned, are little more than time keeping, if time is understood as a lawful pattern of relations between things and events. This pattern forms the subject matter of this study.
Time and again, philosophers of science have been in search of a unifying principle in the foundations of physics. Time is not such a unifying principle. It is a diversifying principle. I do not wish to find unity, but to account for the diversity of nature.
Time and again I shall argue that temporal relations are not based on some conventions, and that the laws of physics are not merely convenient patterns of thought. Arguments will be provided for the view that physics is a dynamic endeavour, intended to open up the creation by discovering laws and applying these to physical reality.
Time and again philosophers have tried to present the foundations of science on an a priori basis. This book seeks to discover these foundations through a close scrutiny of the physical sciences and their history. Only one thing will be taken for granted: the lawfulness of the creation. Different philosophical schools can be distinguished by the way they account for this lawfulness.
Time and again aims to study the foundations of physics in a systematic way. Hence it starts with exposing a hypothetical framework derived from Herman Dooyeweerd’s and Dirk Vollenhoven’s philosophy of the cosmonomic idea, based on Christian principles. This is well-known for its law spheres or modal aspects (which I call relation frames), expressing the idea of modal diversity. The concept of relation frames is supplemented by the concept of structures of individuality. Like the relation frames, structures have a law side and a subject side, and I call the law side characters. Time and again investigates the dynamic development of the mathematical and physical aspects of reality and their mutual projections. These are called retrocipations if they refer to preceding relation frames, and anticipations if they refer to succeeding aspects. Both act as a driving force in the modal dynamic opening process, anticipations as a pull, retrocipations as a push. Vollenhoven, Dooyeweerd and their disciples usually analyse each modal aspect conceptually, by pointing out its meaning nucleus and its analogies (retrocipations and anticipations). This book sets out to discuss each relation frame by extensively investigating the relations which it determines, both subject-subject and subject-object relations.
After nearly forty years, it appears to be worthwhile to revise Time and again, first published in 1980. This second edition, part I of Laws for dynamic development, is foremost an update. Physics has developed considerably, new books and papers have appeared, and a modernization of style and terminology is much needed. Apart from that, it has not changed very much. Therefore, I do not hesitate to retain the book's title.
As a historical companion to Time and Again, I wrote Theories at work, On the structure and functioning of theories in science, in particular during the Copernican revolution (Lanham 1987), again in cooperation with the Institute for Christian Studies at Toronto. This book is now thoroughly revised under a new title: Theory and experiment, Christian philosophy of science in a historical context (2016). A more philosophical book is Nature and freedom, Philosophy of nature, Natural theology, Enlightenment and Romanticism (2018).
Chapter 1
1.1. Foundations research
1.2. Three basic distinctions
1.3. Law and subject
1.4. Typicality and modality
1.5. The modal aspects
1.6. Subjects and objects
1.7. The opening-process
1.8. Science and religion
Time and again
1.1. Foundations research
Although there are many handbooks and textbooks of physics as well as numerous monographs and papers on special topics, until recently, there have been few books dealing with the structure and development of physics in a manner which goes beyond a mere commentary on its methods. Many of the texts, of course, are extremely important for the understanding of physics, and are fascinating because of new vistas explored or admirable because of clarity in expounding older views. However, there remain surprisingly few investigations into the basic structure and coherence of the physical sciences.
There is, perhaps, a historical explanation for this. Influenced by Immanuel Kant, the 19th-century German Naturphilosophie assumed that the foundations of physics could be derived from immediately evident truths being a priori, transcendent and necessary. It was thought that these truths could be understood without the need of experimental verification. Georg Hegel is notorious for making attempts to build a structure of physics on such speculative foundations. In time, however, it became clear that many of these self-evident truths were in fact false. In reaction, many late 19th- and 20th-century physicists rejected outright any a priori philosophical bias for their work - and willy-nilly became adherents of another philosophy, usually some variant of positivism (neo-positivism, Vienna school, analytical philosophy, instrumentalism, operationalism, conventionalism, social-constructivism). Assuming that the content of science is ‘positive fact’ which must be taken for granted, whereas the structure of science is determined by its methods, most positivist philosophers are interested only in the latter.[1] Hence for positivists, philosophy of science is not a matter of ontology or epistemology, but rather a matter of methodology.
The study of the foundations of physics has traditionally been called metaphysics, but, since the beginning of the 19th century, this term has become discredited because of its speculative implications. Currently, this kind of study is usually referred to as foundations research. The critical-realist philosopher of science Mario Bunge defines its goal as being twofold:
‘To perform a critical analysis of the existing theoretical foundations (of physics), and to reconstruct them in a more explicit and cogent way’.[2]
The critical analysis has three tasks:
‘(a) To examine the philosophical presuppositions of physics;
(b) To discuss the status of key concepts, formulas, and procedures of physics;
(c) To shrink or even to eliminate vagueness, inconsistency, and other blemishes.’[3]
Similarly, the task of reconstruction, according to Bunge, has three aspects:
‘(a) To bring order to various fields of physics by axiomatizing their cores;
(b) To examine the various proposed axiomatic foundations;
(c) To discover the relations among the various physical theories.’[4]
For Bunge, the most important tool of foundations research is axiomatization. In this context,
‘... “axiom” means initial assumption, not self-evident pronouncement. There need be nothing intuitive and there is nothing final in an axiom ...’[5] Axiomatization of physical theories ‘... does nothing but organize and complete what has been a more or less disorderly and incomplete body of knowledge: it exhibits the structure of the theory and makes its meaning more precise.’[6]
However, since axiomatization is more an investigation of theories than of physics, it is unlikely that foundations research can be exhausted by formulating axioms. In the first place, axiomatization can only be applied to partial theories[7] such as classical mechanics, classical electromagnetism, thermodynamics, special and general relativity - to mention fields in which this type of foundations research has been carried out more or less successfully.[8] Moreover,
‘... a conceptual system such as Euclidean geometry may be subjected to innumerable axiomatizations, all hazy in different ways.’[9]
In part I of this book I shall not be primarily interested in partial theories, and I shall only occasionally make use of available axiomatizations. My initial focus will be on an ordering scheme of all aspects of the physical sciences - i.e., on the third of Bunge's 'constructive tasks' of foundations research. It is very doubtful whether such an ordering scheme could be axiomatized in any sense, since any axiomatization would itself probably depend on such a scheme, whether explicitly recognized or implicitly assumed. In our discussion, the partial theories are neither placed alongside one another, nor deductively subsumed. They turn out to be interdependent. It is especially the dynamic development of this mutual dependence which will be our subject matter.
A second reason for rejecting axiomatization as the main tool of foundations research is this: any modern axiomatization system familiar to me relies heavily on set theory, as well as on a formal logic making use of set-theoretic methods. This appears to betray a strong influence of Aristotelian philosophy of science, according to which science means the designation of classes and their mutual relations. This Aristotelian influence may be spurious; nevertheless, the approach relies heavily on logic, the laws of which are supposed to be true (if only ‘vacuously true’) and a priori valid tools in foundations research. I shall consider set theory to be a mathematical theory (2.1), and insofar as logic makes use of it, logic is projected on mathematics. Thus, sets and classes as mathematical entities should find a place within the general ordering scheme to be sought. This implies that set theory and its dependent, axiomatization, cannot be accepted as the basis of our research into the foundations of physics, though both will play an important role in part I of this book.[10]
From the above quotations it should be clear that Bunge’s extreme emphasis on logical methods does not imply a purely deductive approach to physics, for his axioms must be found in existing physical theories. Still, he seems to adhere to the medieval idea that everything special is contained in the general. Enrico Cantore’s ‘inductive-genetic’ approach presents a somewhat different view:
‘First, the approach should be inductive ... the philosophical approach to science, to be successful, should concentrate on the detailed study of individual, fully developed theories. Secondly, the approach has to be genetic. Each scientific theory arises out of a slowly growing body of information. Hence the nature of the scientific endeavour and its achievements cannot be properly realized unless one follows the developments of individual theories as they gradually unfold and develop in time.’[11]
This points out my third objection to Bunge’s position: the historical development of a theory must also be accounted for in foundations research.
Finally, I wish to direct a few comments to Bunge’s first critical task of foundations research, i.e., to examine the philosophical presuppositions of physics. First, it must be emphasized that there exists no unique set of philosophical presuppositions. Second, no examination of such presuppositions can itself be philosophically neutral. Bunge himself seems to be more clear about the philosophies which he rejects (positivism, operationalism) than he is about his own philosophical position (realistic objectivism, or critical realism[12]). This vagueness about one’s own philosophy is not unusual among workers in foundations research. Since the beginning of this century it has become abundantly clear that mathematics and physics, and more specifically, investigations into their foundations, are not free from philosophical assumptions, which, in turn, depend on one’s world view. Recognition of this has led to a more or less peaceful coexistence of different philosophical traditions in mathematics (logicism, formalism, and intuitionism)[13] and in physics (neo-positivism, operationalism, realism, conventionalism, materialism, phenomenalism, and postmodern constructivism).[14] A complete criticism of any of these philosophical systems would be out of the question, but, at times, I shall have occasion to confront my views with those of others.
Mission statement
Understanding the structure of the physical sciences requires a philosophical system which makes possible a systematic analysis of the foundations of physics, including its history. The philosophical position from which Laws for dynamic development is written is the philosophy of the cosmonomic idea, developed by Herman Dooyeweerd and Dirk Vollenhoven at Amsterdam, during the second quarter of the 20th century.[15] In contrast to philosophical fashion, this philosophy does not degenerate into a kind of methodology. Growing out of the reformed biblical ‘ground motive’ of creation, fall into sin, and redemption through Jesus Christ in the communion with the Holy Spirit, it is a rather complicated attempt to account for the full complexity of created reality. Not only is this philosophy a systematic investigation into the structure of created reality and human knowledge thereof, but it also tries to account for the temporal development of created reality. For readers of this book it would be helpful to have prior knowledge of this philosophy. However, since only part of its elaborate system is needed for our analysis of the structure of physics, and since this will be elaborated in the course of this book, such prior knowledge is not strictly necessary. In this introductory chapter an outline will be given of the general framework within which the discussion takes place. I do not wish to present this philosophy as an a priori truth; on the contrary, to a large extent, its applicability must be demonstrated by studies such as the one undertaken in this book. Hence I invite the reader to understand this introductory chapter as a provisional outline of a working hypothesis which is to be tested in the following chapters.
Time and again
1.2. Three basic distinctions
Three central, recurring themes can be recognized in the history of scientific philosophy: the search for truth, the search for order, and the search for structure. The first is mainly a philosophical concern, and deals with the relation of laws and that which is subjected to them, the status of law (the nominalism-realism controversy), the possibility of human knowledge, and the methodology of science. Its central problem is to account for the lawfulness of creation. The search for order and structure forms the core of science, and here one deals with basic questions such as: Are there general modes of experience which provide an order for everything within the creation, and if so, which are these universal orders of relation? How can stable things exist, and how can they change? The question of structure already surfaced in Greek philosophy and is still prominent in modern physics and biology, whereas the problem of lawful order did not appear until post-Renaissance science. These three themes, though they cannot be treated separately, are irreducible to each other, and they lead to the introduction of three basic distinctions which form the skeleton of my philosophical theory.
(a) The distinction of law and subject (1.3) is basic to all sciences, though it is not always explicitly recognized as such. Every science worth its name investigates some kind of regularity, which I shall call laws for short. These laws are concerned either with more or less concrete things, events, signs, living beings, artefacts, social communities, etc., or with more or less abstract concepts, ideas, constructs, etc. These things which are subjected to law are commonly referred to as ‘objects’, but, for reasons to be explained later (1.6), I shall refer to them as ‘subjects’ - i.e., beings subjected to laws.
(b) The distinction of typicality and modality (1.4). I shall distinguish those subjects which are more or less concrete from those which are more or less abstract. This distinction is mirrored in the one between typical, special laws, which apply to a limited class of subjects, and modal, general laws, which hold for their mutual relations. The first distinction (law and subject) is frequently identified with the distinction of universals and individuals. However, this identification is inadequate and too crude, since the distinction of typical and modal laws also implies a universal-individual duality. For the same reason, laws cannot be identified with classes or sets, although special laws define classes. Modal laws, however, do not, and therefore cannot be found by generalization: they must be inferred by abstraction.
(c) The various modal aspects or relation frames (1.5). Various solutions to the problem of the general modes of experience have been presented. However, most of these attempt to solve the problem in terms of a single principle of explanation, or if that leads to difficulties, a dualism. This has led to a proliferation of ‘isms’ in philosophy and science: arithmeticism (Pythagorean tradition), geometricism (Descartes’ more geometrico), mechanism (Galileo, Descartes, Huygens, Leibniz, Kant, Maxwell), evolutionism, vitalism, behaviorism, logicism, intuitionism, historism, etc. In contrast to this trend, I shall attempt a solution in terms of several mutually irreducible modes of experience. Herman Dooyeweerd and Dirk Vollenhoven recognized that the modal laws can be grouped into several law spheres or modal aspects. Each modal aspect is equally general and universal, but is irreducible to any other. Part I of this book will be concerned primarily with only four relation frames, to be designated as the numerical, the spatial, the kinetic, and the physical aspect. The biotic and psychic aspects will be treated in part II, the normative aspects in part III.
These three basic distinctions are neither dependent on each other nor reducible to each other. They may be pictured as being mutually orthogonal, like the three axes in a Cartesian coordinate system. The three distinctions, though independent and irreducible, must be studied simultaneously, since they interpenetrate one another. It is not possible to discuss one of them without taking into account the other two. In the following sections I shall discuss these distinctions more extensively. During the discussion I shall point out several distinct aims of science which differ from one another to the extent that different viewpoints are possible within our systematic. I shall argue that each distinction implies a twofold direction of development. This suggests the dynamic development of science (1.7), for the systematic to be discussed will turn out to be dynamic, not static.
Time and again
1.3. Law and subject
The first basic distinction in this investigation of created reality is that of its law side and its subject side. In every philosophy, rightly called scientific, this distinction is explicitly or implicitly made. Without it, science would be impossible. The idea of natural law was developed in classical physics. Yet it is not merely a scientific, epistemological idea, but it is rooted in the creation itself. In fact, not only science, but all our life would be impossible without the awareness (mostly subconscious) of laws, distinct from subjects. In our lifetime we encounter things, animals, plants, men, human societies, organizations, and, above all, ourselves. All of these I refer to as subjects, i.e., they are all subjected to some kind of law. Indeed, it is precisely because of these structural laws that we can distinguish the various subjects from one another, and explain and predict their behaviour. Without having some intuitive idea of structural laws which hold for plants, animals, etc., it would not be possible even to speak of them. We would be unable to perform even the simplest acts of life if we had no idea of, and confidence in, lawfulness.
There are no subjects without modal or structural, general or specific laws. Every subject is constituted by some law, and is related to other subjects by laws. The reverse is also the case. There are no laws without subjects (either possible or actual). The function of a law is to be valid for some or all subjects. These two sides of reality are correlative. As a result, we must avoid both rationalism, which overrates the law side of creation, and irrationalism, which over-emphasizes the subject side of reality. As sides of reality, both law and subject display the self-insufficiency of the creation. Via the law, subjects receive their meaning by pointing to their origin, the Divine creator of heaven and earth. Isolated from this relation, subjects lose their creational meaning. The other direction indicates that God maintains his creation via the laws. Lawlessness implies not only loss of meaning, but also self-destruction into nothingness.
Both the distinction of law and subject and their relation are ontological matters. Since our pre-scientific knowledge of laws is primarily intuitive, the first aim of science is to render these laws explicit, i.e. to explicate them. The laws are implicitly present in reality. We have no a priori knowledge of the laws. Therefore our knowledge of laws, whether implicit or explicit, is both empirical and tentative. It is important to distinguish laws in an ontological sense from our hypothetical law statements in scientific formulations. These statements are also frequently referred to as ‘laws’. Laws and subjects have an ontic nature, whereas theories, models and facts have an epistemic nature.[16] Thus, while electrons have always existed and the laws concerning them have always been valid, a theory of electrons and the fact of their existence did not appear until 1896. Before that year the existence of the electron was not a fact. Theories, hypotheses, models and facts, though bound to the creation order, are human inventions. But laws and sub-human subjects exist independently of human knowledge.
Laws can be discovered, e.g., by induction, because they are related to subjects, and the validity of law statements can be tested by confirmation with facts. This state of affairs, however, does not mean that laws can be reduced to subjectivity. This was most clearly recognized by David Hume, who argued that the ‘inductive assumption’ concerning the possibility of finding laws by induction cannot be justified by experience. Hume insisted that there is no epistemic proof that laws concerning future events can be inferred from regularities observed in the past.[17] This discovery resulted, on the one hand, in a sceptical attitude concerning the very possibility of science and, on the other, in the conviction that laws have a merely epistemic status. I share neither of these views. For me, the possibility of discovering laws is based on faith in the lawfulness of reality and in a God who faithfully maintains his laws. Admittedly, the lawfulness of reality cannot be proved. It is an a priori of all human experience, including scientific experience.[18] According to the philosophy of the cosmonomic idea, different philosophical systems can be characterized according to their respective views on the status of law.[19] Thus the name of this philosophy does not lay a claim on the cosmonomic idea. Rather it pleads for the recognition that any philosophical system must account for the lawfulness of reality. Such an account does not have a scientific but a religious starting point, having scientific consequences.
The positivist view that the truth of law statements can be established by verification of their factual consequences has been criticized by Karl Popper.[20] However, Popper’s falsifiability criterion, though a correction to the positivist view, is only sufficient to demarcate scientific from non-scientific law statements. Regardless of how much evidence may corroborate a natural law statement, acceptance of the statement as a law is always a matter of faith. A law statement is ultimately believed to be true, because of convincing evidence supporting it. This belief does not prove that the law statement is true, for such proof does not exist. This belief is neither individual nor irrational; it is communal, i.e., the community of scientists decides on the faithfulness of the empirical evidence and the acceptability of physical theories.[21] To perform this task the scientific community organizes societies, journals, etc., in which the evidence is judged and debated, according to unwritten codes. Even then, the truth of any law statement cannot be absolutely proved or disproved. Indeed, in many cases, scientific research is initiated because someone (on quite rational grounds) does not believe the accepted views on some particular subject.
Ultimately, an acceptance of the truth of law statements and empirical evidence is based on belief, both in the reliability of one’s colleagues, and in the lawfulness of the creation. Hence there is room for the rejection of formerly held law statements, and the critical reconsideration of older evidence in the light of new evidence or insights. This same state of affairs applies to the consideration of subjects. Because of the correlation of law and subject, knowledge of facts is always theory-laden. Thus, at present, it seems quite certain that electrons and stars are real existing entities, whereas one may be less sure about the existence of quarks and quasars. In these cases, too, the degree of certainty depends upon the availability of independent, reliable evidence. In my opinion, both laws and subjects are discovered, implying an active role for the scientific explorer. How theories are found or invented is not well understood. The scientist’s fantasy and genius are themselves objects of historical and psychological research, and certainly cannot be reduced to simple logical rules of deduction and induction.[22]
Though the number of laws may be infinite, they are not all independent, and it is often possible to deduce one law statement from others. In this case it is said that the former is reduced to the latter. The reduction of laws and, conversely, the deduction of new laws and their consequences for subjects is the second aim of science. Axiomatization can be a very helpful tool in investigating the possibility of such reduction schemes. Attempts to reduce all laws to a single principle have been made in every epoch of philosophy, beginning with Thales’ dictum ‘Everything is made of water’.[23] In classical mechanism Galileo Galilei, René Descartes and Christiaan Huygens attempted to explain all physical phenomena from the motion of unchangeable pieces of matter.
1.4. Typicality and modality
In addition to the distinction of law and subject, it is very fruitful to introduce a second basic distinction, that of typicality and modality. This distinguishes specific laws which are valid for a limited class of subjects (typical laws) from general laws valid for all kinds of subjects (modal laws). Typical laws, in principle, delineate the class of subjects to which they apply, describing their structures and typical properties. Examples of such laws are Coulomb’s law (applicable only to charged subjects), Pauli’s principle (applicable only to fermions), etc. Often the law describing the structure of a particular subject (e.g., the copper atom) can be reduced to some more general typical laws (e.g., the electromagnetic laws in quantum physics). On the other hand, general, modal laws are those which have a universal validity. For example, the law of gravitation applies to all physical subjects, regardless of their typical structure. We call these modal laws because, rather than circumscribing a certain class of subjects, they describe a mode of being, of experience, or of explanation. In particular, modal laws determine relations.
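The contrast may be illustrated with the standard textbook forms of the two laws just mentioned (the formulas are quoted here only as an illustration and play no further role in the argument):

\[
F_{\text{Coulomb}} = \frac{1}{4\pi\varepsilon_0}\,\frac{q_1 q_2}{r^2}, \qquad F_{\text{gravitation}} = G\,\frac{m_1 m_2}{r^2}.
\]

Formally the two expressions look alike, but Coulomb’s law is typical: it is valid only for subjects having the typical property of charge ($q_1$, $q_2$), and it thereby helps to delineate the class of charged subjects. The law of gravitation is modal: because every physical subject has mass, it determines a relation between any two physical subjects whatsoever, regardless of their typical structure.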
This distinction is also relevant to the way in which different laws are discovered and formulated. Whereas typical laws can usually be found by induction and generalization of empirical facts or lower-level law statements, modal laws are found by abstraction. Euclidean geometry, Galileo’s discovery of the laws of motion and the subsequent development of classical mechanics, and thermodynamic laws are all examples of laws found by abstraction. This state of affairs is reflected in the use of the term rational mechanics, in distinction from experimental physics.
At first sight the distinction between typicality and modality appears to apply only to laws. Indeed, all concretely existing things, events, organisms, etc., have some typical structure. However, just as modal laws are found by abstraction, modal subjects, which are abstracted from any typical and individual properties, are also found to exist. The abstract modal subjects (so-called because they are exclusively subjected to modal laws) are indispensable in science for the ordering of our experience. Numbers, coordinate systems, inertial and isolated systems are all examples of modal subjects. They do not exist in any concrete sense, since they lack any individuality and typicality. Nevertheless, in the sense of belonging to created reality, these subjects are perfectly real – they are abstracted from concrete, individually existing things, events, etc. Typical laws must be disentangled in order to discover modal laws. This process could not be carried out without the use of abstract modal subjects. Abstraction may be called the third aim of science, which includes the formulation of modal, universal laws, as well as the modal analysis of concrete reality on both the law side and the subject side.
The distinction of typicality and modality is, however, not merely an epistemological one, for though there is a plurality of laws and subjects, there is only one reality. This means that even though subjects may have widely differing typical structures, they must be related in a general way. It is these general (thus modal) subject-subject relations which come to the fore when we study modal laws (1.5). Therefore, the modal aspects may be aptly called relation frames. For instance, two physical subjects, regardless of their typical, individual structures, are always related, since they must have a certain spatial distance and a certain relative velocity. But in order to investigate these general relationships, the subjects must be deprived of their typicality; that is, modal laws have corresponding modal subjects.
I shall define the character of an individual thing or event as a typical set of specific laws (1.5). Therefore, the fourth aim of science is the reconstruction or synthesis of typical laws occurring in characters for classes of things or events. Since modal laws are too universal to form any typical structure, the starting point for the reconstruction cannot be taken solely in the modal laws themselves. As it happens, in physics, in addition to purely modal gravitational interaction one must also consider electromagnetic interaction and two types of nuclear interaction. Despite many efforts toward the development of a unified field theory, these fundamental interactions cannot be reduced to one another (11.1). With the help of modal laws and these typical interactions, an enormous number of characters for typical structures may be recognized: nuclei, atoms, molecules, crystals, particles, quasi-particles, etc. (chapter 11). Investigations of these structures reveal both sides of the modality-typicality distinction: abstraction and reconstruction, analysis and synthesis. Without the existence of the irreducible fundamental typical interactions, typical laws could be subsumed under modal laws. Because of their irreducibility, the distinction of typicality and modality must be recognized as being orthogonal to the distinction between law and subject. The study of typicality rests heavily on modality, which we shall discuss first, but also the investigation of modality requires insight into typicality.
This distinction of typicality and modality appears in several other philosophical systems in one form or another. Norman Campbell distinguishes typical laws from other laws. He calls typical laws
‘…laws of the kind which assert the properties of a kind of system ... The ‘classificatory’ sciences differ from other sciences in that they confine themselves to laws of this type…’[24]
Henry Margenau[25] speaks of the ‘immediately given’ from which a scientist passes to ‘orderly knowledge’ by the formation of ‘constructs’. Between the former and the latter there are ‘rules of correspondence’ and there is a ‘circuit of empirical confirmation’. Mario Bunge states ‘Every physical idea is expressed in some language and has a logical structure and a context of meaning.’[26] The language has a (modal) syntax or grammar and, via a semantics, is connected with reality. From my point of view, this may be recognized because the logical and sign aspects are universal, but one should avoid the pitfall of absolutizing them.
1.5. Relation frames
The theory of the modal aspects or relation frames, as I prefer to call these, is one of the most important chapters in the philosophy of the cosmonomic idea.[27] Herman Dooyeweerd says:
‘ ... our theoretical thought is bound to the temporal horizon of human experience and moves within this horizon. Within the temporal order, this experience displays a great diversity of fundamental modal aspects, or modalities which in the first place are aspects of time itself. These aspects do not, as such, refer to a concrete what, i.e., to concrete things or events, but only to the how, i.e., the particular and fundamental mode, or manner, in which we experience them. Therefore we speak of the modal aspects of this experience to underline that they are only the fundamental modes of the latter. They should not be identified with the concrete phenomena of empirical reality, which function, in principle, in all of these aspects.’[28]
Because of the genetic nature of scientific knowledge the designation of the various modal aspects must always be tentative and hypothetical. Dooyeweerd himself did not distinguish the kinetic from the physical modal aspect until 1953. The distinction of two mutually irreducible modal aspects is based on an analysis of our contemporary knowledge. Part I reports on such an analysis for the first four modal aspects. As we shall see, this analysis sometimes has to rely on insights into specific characters, anticipating the much more extensive investigation in part II.
Principles of explanation
In science, the different modes of experience can be different modes of explanation as well. 17th-century physics distinguished four mutually irreducible principles of explanation: quantitative, spatial, kinetic and physical interaction. This provides us with a possible distinction of the special sciences on an ontological basis, at least insofar as a special science can be characterized by one of the irreducible modes of explanation. In principle, each modal aspect has a corresponding special science: arithmetic or algebra with the numerical aspect, geometry with the spatial aspect, kinematics with the kinetic aspect, physics (including chemistry and astronomy) with the physical relation frame, biology with the biotic aspect, etc.[29] This classification is not exhaustive, however, since some sciences (geology, for example), study certain structures from the viewpoint of several modal aspects, no single one of which takes a leading role.
Temporal relations
Temporal reality is a multiply-connected pattern of relations. Although many of these relations have a typical structure, it is only possible to understand the unity, i.e., the mutual relatedness of all subjects in the creation, if at least some of these relations are of a modal, universal nature. All concrete existing things, events, etc., have mutual numerical, spatial, kinetic, and physical relations. These mutual relations make it possible to become aware of and understand these subjects. We are, therefore, entitled to speak of the relation frames as universal modes of temporal relations.
Within this modal relatedness a law side may be distinguished from a subject side. On the law side, in each modal aspect, one finds a distinct modal order, which is correlated with a modal subject-subject relation on the subject side. In the numerical aspect the modal order is the serial order of smaller and larger, or earlier and later. This modal order originally correlates with the numerical difference or ratio of two numbers, as modal, numerical subject-subject relations. The modal order in the spatial modal aspect is that of simultaneous coexistence, which is correlated with the relative spatial position of two subjects on the subject side. In the kinetic modal aspect the modal order of uniform time flow is correlated with subjective relative motion, and in the physical aspect the modal order appears as irreversibility, which is correlated with the physical interaction of two or more subjects on the subject side.
The modal order in every relation frame refers to our common understanding of time, since earlier or later, simultaneity, the uniform flow of time, and irreversibility are all acknowledged temporal relations. At first sight, the same cannot be said of the modal subject-subject relations such as relative position and interaction. However, we shall see that on the subject side, the opened-up numerical subject-subject relations (anticipating other subject-subject relations) most closely approximate what we usually refer to as ‘time’. This is most clearly shown by an analysis of the historical development of time measurement, at least insofar as such a development can be reconstructed. Initially, time measurement was simply done by counting (days, months, years, etc.). Later, time was measured by the relative position of the sun or the stars in the sky, with or without the help of instruments such as a sundial. In still more advanced cultures, time was measured by utilizing the regular motion of more or less complicated clockworks. Finally, in most recent developments time is measured via irreversible processes, for example, in atomic clocks.
In a scientific context, however, it is inadequate to work with either a simple common notion of time, or a merely objective representation of subjective relations. All modal subject-subject relations as well as the modal orders to which they are subjected must be recognized as being temporal. Time relates all subjects to each other under a universal law of order. The question as to whether time is relational or absolute in some sense has long been debated and still has not been settled.[30] Since the 19th century, absolute time has been taken to imply a unique universal reference system. I shall show that the theory of modal time requires the existence of several reference systems, none of them unique, all of them universal, allowing an objective description of our world.
Although the modal aspects are mutually irreducible, they are neither unconnected nor independent. The modal aspects display a serial order. As a result we can speak of earlier and later modal aspects in the sense that a later modal aspect presupposes the earlier ones. For example, the spatial modal aspect presupposes the numerical relation frame. If this were not so, it would not be possible to speak of three-dimensional space, the four sides of a square, or any other numerical attribute of spatial functioning. In a similar way, the spatial aspect is presupposed by the kinetic modal aspect, which in its turn, is presupposed by the physical aspect. Similarly, the biotic aspect presupposes the physical aspect, and so forth.
The later aspects refer back to, or retrocipate on the earlier ones. Thus each modal aspect, except for the numerical (first) aspect, contains retrocipations. It means that the subject-subject relations in one aspect can be projected on those in an earlier one. Indeed, the meaning of any modal aspect cannot be fully grasped without an insight into its retrocipations. Anticipations are the counterparts of retrocipations. Not only does each modal aspect (except the first) retrocipate on the earlier aspects, but each earlier aspect (except the last) anticipates the later ones.
Part I will only be concerned with the retrocipations and anticipations between the first four modal aspects. These anticipations and retrocipations project relations in one frame onto relations in another frame. In keeping with our distinction between the law side and the subject side of reality, we shall find these projections both on the law side and on the subject side of the creation. Thus the view that the modal aspects form a sort of layer structure in reality, with each layer built upon the earlier ones, must be rejected. Rather than being well separated departments of reality, the relation frames are intertwined, mutually irreducible, indispensable aspects of reality. The designation and distinction of relation frames and the exploration of their retrocipations and anticipations may be called the fifth aim of science.
The relevance of the relation frames for typicality
The distinction of the modal aspects is relevant, not only for modal laws and modal subject-subject relations, but also for typical relationships. A typical structural law may be viewed as a typical conglomerate of relevant modal and typical laws. Such a typical structural set of laws, which I shall call a character, has two limiting modal aspects, to be designated as the founding aspect, and the leading or qualifying frame of reference. For example, atoms, stones, and stars, called ‘physical things’ for short, are qualified by the physical modal aspect, whereas plants and fungi are qualified by the biotic aspect. On the other hand, the structure of an atom is founded in the spatial relation frame, since it consists of a nucleus surrounded by an appropriate number of electrons. In contrast, particles are founded in the numerical frame, since they are characterized only by typical magnitudes. This intricate state of affairs, to be discussed in greater detail in chapters 10 and 11, is further complicated by the fact that, within an atom, the nucleus, though itself spatially founded, functions as a particle. Such a relationship Dooyeweerd referred to as enkapsis: the structure of the nucleus is enkaptically bound within the structure of the atom. I prefer to say that the two structures are interlaced. In the same way, atoms are interlaced with the structure of a molecule, and molecules within the structure of a living cell. It means that, besides the primary qualifying and the secondary founding aspects, each character has a tertiary, anticipatory disposition to be interlaced with other characters. This disposition is largely responsible for the dynamic development of the natural world.
The empirical way to find the various relation frames
Hence the modal aspects are presented as mutually irreducible but connected modes of experience, modes of explanation, modes of order, and first of all modes of temporal relations. It should not be surprising to find that modes of experience and explanation are identified with modes of order and relation. In a broad sense, explanation means to order pieces of experience by relating them to other pieces of experience under a law. The relation frames should not be understood as self-evident a priori modes of thought laid bare by a metaphysics independent of empirical science. On the contrary, the arguments for the designation of the modal aspects will be found in science (understood as the empirical investigation of the creation), not in metaphysical speculation, based on a supposed autonomy of human thought.
1.6. Subjects and objects
We have now covered enough ground to justify the use of the word subjects to designate things which, perhaps, are more commonly referred to as objects. In fact, the linguistic use of these words is more original than the modern scientific and philosophical practice.
For example, consider the following question: Is it possible to speak of modal, universal, biotic laws which are valid for all kinds of subjects, regardless of their typical structure? Initially, it would seem that a stone is not subject to biotic laws. In order to answer this question adequately it is helpful to distinguish between subjects and objects. In the philosophy of the cosmonomic idea, subjects are actively or directly subjected to a certain law, whereas objects, in contrast, are related to the law only passively or mediately. This implies that objects receive their creational meaning from the subject to which they are related by a subject-object relation. Thus a stone cannot be a biotic subject. Only living organisms can be subjects to biotic laws. But atoms and molecules, rocks and sticks, may function as biotic objects within the sphere of some biotic law. For example, a bird’s nest, as a subject, is subjected to only mathematical and physical laws. As a bird’s nest, however, it can be understood adequately only as a biotic object; the nest has an objective biotic qualifying aspect. The bird’s nest receives its true objective biotic meaning through its relation to a bird, which is a biotic subject.[31]
The distinction of subject and object is not limited to typical structures of reality. Subjects and objects also appear on the modal side of reality. The path of a moving subject is a kinetic modal object since the path itself is motionless; and the state of a physical subject is a physical modal object since states do not interact.
Subjects and objects in epistemology
It is also possible to speak of subjects and objects in an epistemological context. In this case, however, only humans can be subjects, since things, events, plants, and animals always remain objects of scientific or common thought. The latter can only function as subjects in an ontological context. As observed above, during the first half of the 20th century epistemology took priority over ontology in the dominant western philosophies. Since the Renaissance the ground motive of western thought has been the relation of freedom and nature – i.e., the relation of human thought and activity, and its natural object.[32] In developments of the past four or five centuries, the natural subjects have become increasingly objectified. Whereas they retained an independent existence, determined by their spatial extension or mechanical interaction in the philosophy of René Descartes and Gottfried Leibniz, natural subjects were denatured, in principle, to unknown Dinge an sich in Immanuel Kant’s thought. In modern positivistic and phenomenalistic thought they became mere appearances. Occasionally existentialistic circles have tried to restore nature in a purely individual relation of humans and their environment. Paralleling this development, natural laws were reduced to mere epistemic ordering principles, whether a priori and unavoidable (Kant), merely economic (Ernst Mach), or conventional (Henri Poincaré).
These developments are reflected in modern terminology. Today one generally speaks of natural objects, even when their subjectivity to natural law is discussed. The modern view is strongly oriented towards a completely functionalistic view of reality, in which the modal aspects considered as universal modes of thought are the dominant principles of explanation. In this respect, post-Renaissance philosophy differs sharply from Greek and medieval philosophies, which were usually dominated by a typicalistic view, most clearly exemplified in Aristotle’s form-matter scheme.[33]
For Christian philosophy there is no need to absolutize any modal aspect, or any typical structure or relationship. At its foundation lies the acknowledgement that the creation is not independent of its Creator. On the one hand, there is no substance which exists independently of law, and, on the other hand, all natural subjects exist as creatures (being and becoming) under the laws. Because they are all subjected to laws, all subjects point to the Lawgiver. Herman Dooyeweerd’s dictum ‘meaning is the mode of being of all that is created,’[34] implies that natural subjects acquire their full meaning only if, in addition to their subject functions, all of their object functions are also opened up in their relation to humankind. In this relation natural subjects receive their full religious meaning since, in its relation to God, humanity is the religious centre of the creation.
The distinction of subject and object enables us to achieve a clear insight into the terms objectification and objectivity. In humanistic thought everything which relates to sub-human subjects is referred to as objective. As a result, the demand for an objective science has acquired an entirely confused meaning. It is sometimes understood as being intersubjective or public. In this case one distinguishes between individual (subjective) experience and public (objective) experience.[35] In other contexts objectivity is identified with universal validity or law conformity. In the philosophy of the cosmonomic idea, the meaning of the word objective is different: objectivity means a representation of modal and typical states of affairs referring back to earlier modal aspects. Objectification is made possible by the existence of retrocipations on these earlier aspects, and of developing the latter’s anticipations. The problem of objectification, which may be termed the sixth aim of science, shall occupy much of our attention. Spatial points, which refer back to the numerical modal aspect, enable us to find an objective numerical representation of spatial magnitudes and relative positions (chapter 2). The path of motion, referring back to the spatial modal aspect, provides us with an objective representation of the motion of a kinetic subject (chapter 4). Similarly, the state of a physical system allows us to objectify the system’s interaction with other systems (chapter 5).
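By way of illustration, in standard textbook notation (added here only as an illustration, anticipating the chapters mentioned): relative to a chosen coordinate system, a spatial point $P$ is objectified by a triple of numbers, and the relative position of two points by a numerical distance,

\[
P \mapsto (x_1, x_2, x_3), \qquad d(P, Q) = \sqrt{(x_1 - y_1)^2 + (x_2 - y_2)^2 + (x_3 - y_3)^2},
\]

a purely numerical representation of a spatial relation. Similarly, a path $\mathbf{x}(t)$ gives a spatial representation of a motion, and a state (in classical mechanics, for instance, a specification of positions and momenta) gives an objective representation of a system’s interaction with other systems.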
For physics, objectification means a representation of physical states of affairs in mathematical terms, in particular the projection of physical relations on kinetic, spatial or quantitative ones. It became an important tool in the dynamic development of physics. It is frequently said that mathematics is the language of physics, as if it were a merely linguistic matter. The real state of affairs is more complicated than this metaphor suggests. The modal aspects which precede the physical aspect and form the subject matter of mathematics, are universal aspects of the full creation, including physically qualified things and events. It is impossible to account for physical functioning without including the earlier relation frames in one’s analysis.
1.7. Dynamic development of the relation frames
Dooyeweerd called the development of anticipations the opening process.[36] Including the development of retrocipations, I shall discuss this historical process in the present section. In part II the dynamic development of typical structures will be treated.
Several scholars in the history of science have pointed toward this process. Specifically, they reject the view that
‘... scientists are men who, successfully or not, have striven to contribute one or another element to that particular constellation (of facts, theories, and methods collected in current texts) ...’, such that ‘... scientific development becomes the piecemeal process by which these items have been added, singly and in combination, to the ever growing stockpile that constitutes scientific technique and knowledge’.[37]
In his The structure of scientific revolutions (1962), Thomas Kuhn introduced the distinction between normal science, which is guided by some time-honoured paradigm, and scientific revolutions, during which one paradigm is replaced by a new one.[38] Prior to the introduction and acceptance of any paradigm,
‘ ... the early developmental stages of most sciences have been characterized by continual competition between a number of distinct views of nature ... What differentiated these various schools was ... their incommensurable ways of seeing the world and of practicing science in it …’[39]
After a communis opinio is established ‘ ... on the assumption that the scientific community knows what the world is like ...’[40] normal science proceeds as ‘ ... a strenuous and devoted attempt to force nature into the conceptual boxes supplied by professional education’.[41] Eventually, in the course of normal science, anomalies, which cannot be understood within the existing framework, appear and
‘ ... then begin the extraordinary investigations that lead the profession at last to a new set of commitments, a new basis for the practice of science.’[42]
Gerald Holton’s Thematic origins of scientific thought also points to the difficulty with which new ideas are accepted. Referring to Albert Einstein’s principle of relativity, he observes:
‘... it is precisely such non-verifiable and non-falsifiable (and not even quite arbitrary) thematic hypotheses which are most difficult to advance or to accept. It is they which are at the heart of major changes or disputes, and whose growth, reign and decay are much neglected indicators of the most significant developments in the history of science.’[43]
I wonder whether Kuhn would call paradigmatic the following themes mentioned by Holton: conservation (of mass, energy, etc.), mechanism,
‘ ... macrocosmos-microcosmos correspondence, inherent principles, teleological drives, action at a distance, space filling media, organismic interpretations, hidden mechanisms, or absolutes of time, space, and simultaneity’, ‘…the efficacy of geometry, the conscious and unconscious preoccupation with symmetries.’[44]
For Kuhn,
‘A paradigm ... is in the first place, a fundamental scientific achievement and one which includes both a theory and some exemplary applications to the results of experiment and observation. More important, it is an open-ended achievement, one which leaves all sorts of research still to be done. And, finally, it is an accepted achievement in the sense that it is received by a group whose members no longer try to rival it or to create new alternatives to it. Instead, they attempt to exploit and extend it in a variety of ways…’[45]
Holton’s themes are more or less orthogonal to the
‘contingent plane’ of ‘propositions concerning empirical matters of fact (which ultimately boil down to meter readings) and propositions concerning logic and mathematics (which ultimately boil down to tautologies).’[46] ‘A thematic position or methodological theme is a guiding theme in the pursuit of scientific work, such as the preference for seeking to express the laws of physics whenever possible in terms of constancies, or extrema (maxima or minima), or impotency (‘It is impossible that … ‘)’[47]
Holton also distinguishes thematic components of concepts such as force or inertia, and thematic propositions or thematic hypotheses, containing one or more thematic concepts, and which may be a product of a methodological theme.[48] As a result, Holton’s themes appear to be more persistent than Kuhn’s paradigms:
‘Only occasionally (as in the case of Niels Bohr) does it seem necessary to introduce a qualitatively new theme into science’.[49]
Paul Feyerabend goes even further. Whereas both Kuhn and Holton accept the historical fact of the existence of paradigms, themes, and normal science, Feyerabend insists that the latter is dogmatic, since it clings to a single paradigm.[50] He pleads for open-mindedness, for competing views. It appears, at least from a Kuhnian perspective, that he wishes to return to the pre-paradigm period of science.[51]
Feyerabend strongly attacks the ‘restrictive conditions’ of consistency and meaning invariance, present in positivist empiricism:
‘Only such theories are then admissible in a given domain which either contain the theories already used in this domain, or which are at least consistent with them inside the domain; and meanings will have to be invariant with respect to scientific progress; that is, all future theories will have to be framed in such a manner that their use in explanations does not affect what is said by the theories, or factual reports to be explained.’[52]
Insofar as it is assumed that sense data are independent of theories, and that the accumulation of new data cannot give rise to a change in meaning of older theories, meaning invariance is a leading motive in positivism. Criticism of this view by Kuhn, Holton, and Feyerabend is based on historical grounds. These writers give many examples which show that any change of paradigm implies a change in meaning, also with respect to observational facts.
Deepening and relativizing
According to the philosophy of the cosmonomic idea meaning is determined by the relation of law and subject. Everything created has dependent meaning, as a result of being subjected to laws by its Creator. However, this does not imply another kind of meaning invariance. Indeed, it is precisely in the dynamic development process that meaning is both deepened and relativized. From this perspective, one could paraphrase Kuhn’s theory as follows: In the pre-paradigm phase, scientists are not yet aware of the meaning of their concepts. With the formation of the first paradigms, it is mainly the retrocipatory analogies of the modal aspects or typical structures that are discovered (this includes the search for objectivity, 1.6). Paradigm change is brought about by the discovery of either a new retrocipatory projection acting as a pushing force, or, even more spectacularly, by the discovery of an anticipation acting as a pull, an attractive force. Such developments account for the appearance of Kuhn’s scientific revolutions as well as Holton’s more persistent themes. With the development of a modal aspect, the latter remains in existence, as a fundamental and irreducible mode of explanation, though it may be viewed in a different light. Thus, whether Euclidean or non-Euclidean geometries are used, the aim of geometry remains to account for spatial relations. I shall discuss several examples of this model in chapters to come.
Does meaning change if it is developed, and, if so, to what extent does it change and to what extent does it remain invariant? The opening process adds anticipatory projections to a modal aspect, but it simultaneously influences the nuclear meaning of the aspect, together with its retrocipatory projections. This dynamic process I refer to as deepening and relativizing the original meaning of a modal aspect, since in this way the aspect becomes related to later modal aspects. This position is more complicated than either meaning invariance or meaning relativism. It involves both the law side and the subject side of reality.
The concept of mass
As an example, let us consider the concept of mass.[53] This concept was first introduced by Johann Kepler and Galileo Galilei, but became paradigmatic only with Isaac Newton. One of the properties of mass is its conservation in chemical reactions, only justified empirically after Newton’s time. This characteristic of mass is challenged in Albert Einstein’s theory of relativity (3.8). One may now ask whether the meaning of mass has changed. Positivists will reply that, since the factual content of the sense data related to mass has not changed, its meaning must remain invariant. Some operationalists will say that, as there are different experimental methods to determine mass, there are different meanings of mass which are independent of theory, and the meaning of mass will remain invariant with respect to change of theory. Feyerabend, among others, replies that any experimental method is theory laden and, hence, operational meanings are variable with theories. He states that, since mass is subject to different laws in Newtonian physics than in relativity physics, its meaning has also radically changed.[54] Still others point out that relativistic mass shares at least some of the properties of classical mass, such that some sort of family resemblance exists between the two.[55]
A view, commonly held in physics, is that Newtonian mechanics is a limiting case of relativity physics, since, at low velocities, the relativistic and Newtonian formulas become approximately equal. The relevance of this statement becomes clear only if we remember that experimental measurements always have a finite accuracy. Within given limits of accuracy, it is rather easy to determine the velocity below which it is impossible to distinguish Newtonian from relativistic results. A positivistic interpretation will say that, since in this case there is no difference between the two theories, the meaning of terms such as mass must also be the same. A realistic interpretation will insist that the meaning of mass is different in the two theories. I tend to reject both these views.
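The limiting-case claim can be made concrete with the standard relativistic expression for mass (a textbook formula, quoted here only as an illustration and anticipating 3.8):

\[
m(v) = \frac{m_0}{\sqrt{1 - v^2/c^2}} = m_0\left(1 + \frac{v^2}{2c^2} + \ldots\right),
\]

so that for $v \ll c$ the relativistic mass exceeds the Newtonian (constant) mass $m_0$ by a fraction of roughly $v^2/2c^2$. For a velocity of 30 km/s, the orbital speed of the earth, this amounts to about five parts in $10^9$, far below the accuracy of most measurements. Within given limits of accuracy it is therefore indeed easy to determine the velocity below which Newtonian and relativistic results cannot be distinguished.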
As will be argued in chapters 4 and 5, Newtonian physics is mainly retrocipatory, whereas both relativity physics and wave motion concern the kinematic development of the numerical and spatial modal aspects. This development also has a bearing on the numerical and spatial projections of the physical modal aspect, e.g. on mass. The meaning of mass in Newtonian physics can be understood as a numerical retrocipation in the physical modal aspect. In relativity physics, this retrocipation is also opened up, inasmuch as all numerical and spatial relations become frame-dependent. But this state of affairs implies neither a meaning invariance (since the meaning does change), nor a loss of meaning (since it remains a retrocipation in the physical aspect). Rather, the development of relativity physics results in a deepening and relativizing of the original closed meaning of mass in Newtonian physics. Relativizing does not result in a loss of meaning, especially since the retrocipatory viewpoint remains valid and useful. Indeed, there are so many instances where Newtonian mass is still relevant that it is illegitimate to characterize the Newtonian interpretation as approximately true, but formally false.
It should be clear by now that this theory of development does not lead to meaning relativism. Both in closed and in opened up form, meaning is bound to law. Scientists, who study laws and their relations to the subject side of reality, are similarly bound to law. We find, however, that in the opening process not only the subject side but also the law side is involved: that is why meaning is opened up, and why the meaning of a developed modal aspect or typical structure cannot be the same as the meaning of one that is still closed.
1.8. Science and religion
Explicitly, I have presented the following aims of science: (1) the explication of laws, and (2) the reduction and deduction of laws (1.3); (3) abstraction or analysis, and (4) reconstruction or synthesis of typical laws (1.4); (5) the designation of modal aspects and the exploration of retrocipations and anticipations (1.5); (6) objectification (1.6). One could add: (7) the explanation of individual facts and phenomena.[56] These goals of science can be generalized by stating that the aim of science is the theoretical development of the full creation (1.7).
In addition to the theoretical opening process, many similar processes are operating within the creation. There is a natural opening process (the temporal evolutionary development of the cosmos, part II); individual ones (the growth, flowering and decay of a plant, or the opening up of the experiential horizon of an animal or human); technical development (the opening up of possibilities laid down in the creation); an artistic one, a social one, a linguistic one, etc. In each of these cases, it may be expected that the directions of retrocipation and anticipation as forces driving dynamic development will be retraceable.
The distinction of law and subject is itself directed. Subjects do not exist without laws, and via the laws they acquire meaning as creatures. The direction of subject-to-law points to the origin of creation, the sovereign Creator and Lawgiver, who Himself is subject to no law. As viewed from the subject side, the law is the boundary of created reality, across which no subject can step. For God, the law is not a boundary,[57] but, by maintaining His laws, according to His covenant, He remains faithful to His creation.[58] Thus the direction of law to subject expresses the dependence of the creation upon its Creator. The unfolding process becomes meaningful only because of this law-subject relation.
The latter statements are clearly not of a scientific nature, but point to an interesting and illuminating state of affairs, displaying both the similarity and the distinction of science and religion.[59] In both cases human beings, who are themselves subjects, search for truth, truth about reality and about themselves. In both cases their attitude is directed toward the origin of creation, and therefore their attention is directed toward the law side of reality. The two cases differ in that, in the scientific attitude, people see the subject side reflected in the law side. As soon as scientists formulate a law (find a law conformity), they must verify or falsify it on the subject side. One may even go as far as Popper, who says that no law statement should be called scientific unless it is potentially falsifiable.[60]
Humans, however, experience that this scientific attitude is not sufficient for finding the full truth about reality. Through science the origin of creation cannot be found: the law side as the boundary of reality cannot be penetrated. It is in their religious attitude that people seek to look beyond the laws. In this effort no principle of verification can help, because any subject points to the law side and beyond for its full religious meaning. At this point human self-insufficiency becomes abundantly clear. Faithful knowledge about the origin of full reality requires revelation, the truth of which humans can only find in religion. However, as we have pointed out earlier, the scientific attitude also rests on faith. The fundamental hypothesis of all sciences - the hypothesis that reality is lawful - cannot be proved; it must be believed. If you don’t believe it, you cannot be a scientist.
If the sovereignty of God as Creator and Lawgiver is not recognized, the unity and origin of reality must be found somewhere within temporal reality itself. In western culture, it is always humans themselves who are assigned the task of locating this origin, and, not recognizing the true origin, they must seek their point of reference in one or another of the modal aspects, or in one of the typical structures. Such selection of reference points has resulted in the formation of the various mutually irreconcilable schools of philosophy, each pretending to be able to explain everything according to a single principle. Alternatively, people may place their trust in power (economic or political), in the church, or in one of the arts.[61] Regardless of where the reference point is chosen, such a choice always leads to a dogmatic (and unprovable) over-rating of the aspect or structure concerned. A balanced and dynamic view of reality can only be achieved by accepting the dependent and self-insufficient being of the creation, in which no aspect or typical structure is overrated or neglected.
[1] On positivism, see Frank 1941; Kolakowski 1966; Von Mises 1939; Popper 1974.
[2] Bunge 1967a, 1.
[3] Bunge 1967a, 1, 2.
[4] Bunge 1967a, 2.
[5] Bunge 1967a, 64; Bunge 1967c.
[6] Bunge 1967a, 68, 69.
[7] See Segal in: Henkin et al. (eds.), 341: ‘... no axiom system is secure if it does not treat a closed system.’
[8] Noll 1974.
[9] Whiteman 1967, 104, 105; cf. Bunge 1967a, 66; 1967b, chapter 9, especially p. 120.
[10] Gödel’s theorem concerning the consistency and the completeness of axiomatized theories also shows some limitations of this method; cf. Gödel 1962; Bunge 1967a, 64.
[11] Cantore 1969, 5.
[12] Bunge 1967a, 44, 49, 58, 287; Bunge 1967c.
[13] Fraenkel, Bar-Hillel 1958.
[14] These are contemporary philosophies. For an enumeration of eight mostly historical views on the relation of natural philosophy and science, see Beth 1948, chapter 3. See also Losee 1972.
[15] Dooyeweerd WdW, NC; Vollenhoven 1950, 2010; Tol, Bril (eds.) 1992. For an introduction to this philosophy, see Kalsbeek 1970; Hart 1984; Clouser 1991a; van Woudenberg 1992; Strauss 2009.
[16] Bunge 1959a, 245ff; Bunge 1967a, 44. Bunge 1959a, 249 defines laws as ‘... the immanent patterns of being and becoming ...’ and law statements as ‘... the conceptual reconstructions ...’ of laws. The relation of law and subjects, and the status of theories, models, facts, induction, deduction, and reduction are objects of epistemological research. See e.g. Hempel 1952, 1965, 1966; Nagel 1961; Popper 1959, 1963; Stegmüller 1969-1970.
[17] Hume 1739, 1748; Braithwaite 1953, chapter 9; Harris 1970, 39, 40; Kolakowski 1966, 42-59; Losee 1972, 101-106; Popper 1972, 1-31, 55-105; Russell 1946, 634-647. Dooyeweerd NC, I, 275ff observes that Hume’s scepticism has a methodological significance, intended to reinforce his psychological ideal of science.
[18] Even the existence of subjects outside ourselves cannot be proved, as was shown by solipsism; cf. Russell 1927, 27ff.
[19] See Dooyeweerd NC, I, 93ff. For a discussion of the status of Newton’s second law of motion (which could serve as an illustration of this assertion), see Hanson 1958.
[20] Popper 1959, chapter 1.
[21] The influence of communal belief on accepted theories has been emphasized by Kuhn 1962, 1970; see also Bunge 1967a, 70; Harris 1970; Ziman 1968, 1976.
[22] Kuhn 1962, chapter 2; Feyerabend 1975; Lakatos 1970; Holton 1973, chapter 3; Finocchiaro 1973.
[23] Russell 1946, 44, 61.
[24] Campbell 1921, 56, 57.
[25] Margenau 1950, chapters 3-6.
[26] Bunge 1967a, 9; Jammer 1974, 10ff.
[27] The modal aspects were originally called ‘law spheres’ (‘wetskringen’ in Dutch). Since Stafleu 2002, I call these ‘relation frames’.
[28] Dooyeweerd 1960b, 6, 7; see also Dooyeweerd NC, I, 3; on the criterion of a modal aspect, see Dooyeweerd NC, II, chapter 1.
[29] The prevailing positivist view reverses the creational order by stating that the sciences must be classified according to their methods (cf. Margenau 1950, 46).
[30] Gale (ed.) 1967.
[31] Dooyeweerd NC, I, 42-43. Because of the distinction of subjects and objects, the term ‘subject side of reality’ should be understood as ‘subject-and-object side’, but for short I shall stick to the usual ‘subject side’.
[32] Dooyeweerd NC, I; 1960b; for the subject-object relation in humanist philosophies, see Dooyeweerd NC, II, 367ff.
[33] Dooyeweerd NC, II, 12; Jaki 1966, chapter 1.
[34] Dooyeweerd NC, I, 4.
[35] See e.g. Popper 1959, 44ff, but also Kant 1781, A 820, B 848; for Popper, objectivity of scientific statements lies in the fact that they can be intersubjectively tested, which implies that the described phenomena should be reproducible. See also Margenau, Park 1967, who enumerate the following ‘meanings of objectivity’: ontological existence (‘the objective reality behind perceptible things’); intersubjectivity; invariance of aspect (‘objectivity must be assigned to those properties which are, or can be made, invariant’); scientific verifiability (‘Constructs which satisfy the metaphysical requirements as well as the stringent rules of empirical confirmation are called verifacts, and verifacts are the carriers of objectivity in the domain of theory’). The ‘metaphysical requirements’, e.g. Ockham’s razor, economy of thought, logical fertility, simplicity, are discussed in Margenau 1950, chapter 5; 1960.
[36] In part III, chapter 15, I interpret this idea in a different sense than Dooyeweerd does, see Dooyeweerd NC, I, 29, II, 181ff. In part IV my views of the development process are applied to the 16th-19th-century history of physics.
[37] Kuhn 1962, 1, 2; see Agassi 1963; for an extensive discussion of Kuhn’s views, see Lakatos, Musgrave (eds.) 1970; Finocchiaro 1973.
[38] Kuhn 1962, 10, 23.
[39] Kuhn 1962, 4.
[40] Kuhn 1962, 5.
[41] Kuhn 1962, 5.
[42] Kuhn 1962, 6.
[43] Holton 1973, 190; see also Holton 1978, chapter 1.
[44] Holton 1973, 24, 25, 27.
[45] Kuhn 1963, 363; it is by no means easy to comprehend the meaning of Kuhn’s paradigms. Masterman 1970 says that Kuhn uses ‘paradigm’ in not less than twenty-one senses.
[46] Holton 1973, 21.
[47] Holton 1973, 28.
[48] Holton 1973, 28.
[49] Holton 1973, 29; also Holton 1973, 61ff.
[50] Feyerabend 1965, 172: ‘Normal science, extended over a considerable time, now assumes the character of stagnation, a lack of new ideas; it seems to become a starting point for dogmatism and metaphysics. Crises, on the other hand, are now not accidental disturbances of a desirable peace; they are periods where science is at its best, exhibiting as they do the methods of progressing through the consideration of alternatives’. See also Popper 1970 and Watkins 1970. Contrary to this, Kuhn 1963, 364 states: ‘Advance from paradigm to paradigm rather than through the continuing competition between recognized classics may be a functional as well as a factual characteristic of mature scientific development’.
[51] Feyerabend 1965, 320, 321: ‘You can be a good empiricist only if you are prepared to work with many alternative theories rather than with a single point of view and ‘experience’. This plurality of theories must not be regarded as a preliminary state of knowledge which will at some time in the future be replaced by the One True Theory’. See also Feyerabend 1975.
[52] Feyerabend 1965, 164; 1970, 323; the latter text reads ‘phrased’ instead of ‘framed’. See also Bohr 1949, 209, 210.
[53] Feyerabend 1970, 325ff; Kuhn 1962, 98ff; Hesse 1974, 64ff.
[54] Feyerabend 1965, 169: ‘That the relativistic concept and the classical concept of mass are very different indeed becomes clear if we also consider that the former is a relation, involving relative velocities between an object and a coordinate system, whereas the latter is a property of the object itself and independent of its behaviour in coordinate systems... . The attempt to identify the classical mass with the relativistic rest mass is of no avail either, for although both may have the same numerical value, they cannot be represented by the same concept’. For a similar viewpoint, see Kuhn 1962, 101, 102.
[55] Kuhn 1962, 45; Hesse 1974, 46-48, 64-65; Hesse observes that the classical and relativistic theories could not even be compared if key concepts like mass had completely different meanings in the two theories.
[56] Popper 1972.
[57] Dooyeweerd NC, I, 99.
[58] Dooyeweerd NC, I, 93.
[59] Dooyeweerd NC, I, 57: By religion is understood ‘... the innate impulse of human selfhood to direct itself toward the true or toward a pretended absolute Origin of all temporal diversity of meaning, which it finds focused concentrically in itself. This description is indubitably a theoretical and philosophical one, because in philosophical reflection an account is required of the meaning of the word ‘religion’ in our argument’.
[60] Popper 1959, 41; see also Lakatos 1970. A similar view was already expressed by Claude Bernard in the 19th century, see Kolakowski 1966, 93. The distinction of falsification and verification reflects the law-subject relation. Scientific law-statements (or ‘all-statements’) should be falsifiable, whereas subjective existential statements (of the form ‘there is a ... ‘) should be verifiable in order to qualify as empirically meaningful. See Popper 1959, 70.
[61] Or in astrology, superstition, myths, etc. That these convictions cannot be ruled out by their supposed lack of empirical support has been shown by Feyerabend 1965; cf. Kuhn 1962, 2.
Chapter 2
Number and space
2.1. Set theory and the first two relation frames
2.2. Numerical relations and the theory of groups
2.3. The dynamic development of the numerical relation frame
2.4. Vectors
2.5. The spatial relation frame
2.6. Spatial subject-object relations
2.7. Spatial subject-subject relations
2.8. Objectivity in the choice of coordinate systems
2.9. The dynamic development of the spatial relation frame
2.1. Set theory and the first two relation frames
Time and again is mostly concerned with an analysis of the foundations of physics. Such an analysis would be quite impossible, however, without taking into account the quantitative and spatial relation frames. In chapter 2 we shall discuss these, though not as extensively as our discussion of the kinetic and the physical aspects in subsequent chapters. This chapter should not be taken out of the context of this book. My only intention is to investigate the quantitative and the spatial modal aspects insofar as they are relevant to physics. The mutual irreducibility of these aspects will be discussed later on. In the present section I shall give a provisional outline of their meaning, and discuss their relation to set theory. The reader should keep in mind the mutual orthogonality of the distinction of law and subject, and that of the various relation frames.
The concept of a set
Plato and Aristotle introduced the traditional view that mathematics is concerned with numbers and with space. Since the end of the 19th century, many have thought that the theory of sets would provide mathematics with its foundations.[1] Since the middle of the 20th century, the emphasis has shifted to structures and relations.[2]
At first sight, the concept of a set is rather trivial, in particular if the number of elements is finite. Then the set is denumerable and countable; we can number and count the elements. It becomes more intricate if the number of elements is not finite yet denumerable (e.g., the set of integers), or infinite and non-denumerable (e.g., the set of real numbers).

The numerical modal aspect of discrete quantity, as a universal mode of being, presupposes that every created thing is a unity, and that there exists a multitude of such unities. The numerical modal aspect is universal since there is nothing in the creation which is not subjected to numerical order. This order can be described as the order of before and after, both in its original meaning of more and less, and in its analogical meaning of smaller and larger in magnitude.
The spatial modal aspect of continuous extension explains why a unique ordering of everything created is impossible solely with the numerical order of before and after. Thus different sets may have the same number of elements, and different things may have the same size. The spatial order of simultaneous coexistence (on the law side of the spatial modal aspect) makes possible the original spatial relation of relative position (on the subject side of the spatial modal aspect). This spatial modal order also involves the analogical concept of equivalence with respect to some property, thereby allowing things to share this property in different degrees. The spatial modal order is only universal if it is considered together with the numerical order. Although the order of simultaneity does not apply to everything created, one can account for all static relations if this order and the order of before and after are taken together.
Set theory is nowadays generally considered to be the basis of the theory of number. Later, in chapter 8, I shall discuss the concept of probability and argue that it refers to the law-subject relation for individuality structures. Since the theories of probability and sets are closely related, I view the idea of a set as giving expression to the law-subject relation. Sets are always determined by some law. This is even the case with examples like ‘the set of all books in my room’, for this refers to the law defining ‘books’. In this context the set of all things on my desk is ill defined without further specification of a ‘thing’. In general, classes are not identifiable, or even imaginable, unless they are defined by a set of laws, and these laws are usually not of a mathematical kind. It is not strictly correct to say that a set is determined by a law. I prefer to say that a set has a law side and a subject side. The idea of a set cannot be reduced, either to the law side, or to the subject side.
Numbers and sets
The concept of number cannot be studied without the idea of sets. Both Baruch de Spinoza and Gottlob Frege observed that one cannot ascribe a number to things, unless these are grasped under a genus.[3] If in the realm of concrete things and events the post-numerical modal aspects are ignored, there is still the possibility of taking some of them together in a collection. After this process of abstraction, all that remains to be said is that concrete things belong to classes of things. The common property of all finite collections is that they can be counted, regardless of the spatial, kinetic, physical, etc., properties of their elements. Thus all finite collections are related either directly by a one-to-one correspondence, or by a one-to-one correspondence between one collection and a proper sub-set of another one. In the latter case the first collection is called smaller than the second one. Because this property is universal, one can now abstract from concrete sets, discovering an abstract and unique collection of natural numbers, serving as a universal reference system for all finite collections.[4]
On the other hand one cannot talk about a set without having a previous idea of a plurality of concrete things and events,[5] nor can one dispense with the individual unity of its members. Aristotle considered the individuality of things as their only property relevant to arithmetic. For Aristotle individuality meant the identity of a thing with itself and its being distinct from other things. Arithmetic had to abstract from all other properties of real things.[6] For example, the universal law of addition demands that if a collection of m members is added to a collection of n members, one always arrives at a collection of (m+n) members, whatever the character of the two collections, provided they have no member in common. This implies that each member has its own subjective identity.
Space and sets
The concept of space cannot be studied without the idea of sets either. A spatial figure is characterized by being connected and having parts. At the same time we have to consider it as an uncountable set of points, though we cannot define it as such. The fact that we can consider each spatial figure as a collection of connected and nevertheless disjoint parts is the necessary basis for the introduction of spatial magnitude.
On the other hand, the idea of a set always has a spatial aspect. In a set we have a number of coexisting members. Members can simultaneously belong to different sets. The notion of sub-sets of a set refers to the simultaneous existence of a whole and its parts. Also the concepts of ‘union’ and ‘intersection’ of sets refer clearly to the spatial modal aspect. In order to make the transition of all finite collections to the set of natural numbers, one often makes use of the concept of ‘equivalence class’. The numerical order of more and less is not directly applicable to sets, but only to equivalence classes of sets, each equivalence class uniting all sets with the same number of elements. This also shows that the spatial as well as the numerical orders are presupposed in this attempt to base a theory of numbers on set theory.[7] In fact, even if we talk about the set of natural numbers, we already refer to simultaneity.
Without the introduction of numerical and spatial orders, the sub-set of a set can only be partially ordered. In order to arrive at a universal order of sets, we have to introduce the more abstract orders of seriality and spatial simultaneity. For Aristotle, the number of a set was a concrete property. Frege was one of the first to recognize the abstract character of the cardinal numbers: there is only one number six, regardless of how many sixtuples of concrete things exist.[8] Even Russell’s definition of the number of a class as ‘the class of all classes which are equivalent to that class’[9] presupposes the abstraction of all properties of sixtuples, except of being classes, and having six members. It especially presupposes the abstraction from the spatial order of simultaneity, for in this case, one abstracts from the fact that so many sextuples exist simultaneously.
It is not my intention to investigate the foundations of set theory. The above arguments only serve to make clear the mutual orthogonality of the law-subject distinction, which finds its mathematical expression in the theory of sets, and the distinction of the various modal aspects, which we intend to study in this and the subsequent chapters.
2.2. Numerical relations and the theory of groups
The numbers form an abstract reference system for any serial order. Having no concrete existence, their meaning is purely modal. They are numerical modal subjects, being subject to numerical modal laws only. The different number systems which are relevant to physics will be investigated briefly: the natural, integral, rational, real, and complex numbers, as well as vectors. This will be done in a quasi-formal way, using a group-theoretic approach, because of the relevance of group-theory to present-day physics, and to our analysis of it. As will be seen later (10.5), groups are typical structures with a numerical character, to be used as instruments in the analysis of the numerical, spatial and kinetic relation frames.
Natural numbers
On the law side of the numerical relation frame time expresses itself as the serial order of before and after.[10] The number 2 is earlier than the number 3, because the latter can be generated from the former by addition of the number 1.
On the subject side, the numerical difference is correlated to this temporal order. Obviously, the statement that some number is later than another one gives rise to the question: ‘How much later?’ Indeed, the numerical difference between two numbers is related to their temporal order of earlier and later: the difference is positive or negative depending on this order (if a>b, then a−b>0, etc.).
This serial order forms the basis of Giuseppe Peano’s axioms formulating the laws for the sequence N of the natural numbers.[11] The axioms apply the concepts of sequence, successor and first number, but do not apply the concept of equivalence. According to Peano, the concept of a successor is characteristic for the natural numbers:
1. N contains a natural number, indicated by 0.[12]
2. Each natural number a is uniquely joined by a natural number a+, the successor of a.[13]
5. If a subset M of N contains the element 0, and besides each element a its successor a+ as well, then M=N.[14]
The transitive relation ‘larger than’ is now applicable to the natural numbers.[15]
The character of the natural numbers expressed by Peano’s axioms is primarily quantitatively characterized. It has no secondary foundation for lack of a relation frame preceding the quantitative one.[16] As a tertiary characteristic, the set of natural numbers has the disposition to expand itself into other sets of numbers.
The laws of addition, multiplication, and raising powers are derivable from Peano’s axioms.[17] The class of natural numbers is complete with respect to these operations.[18] If a and b are natural numbers, then a+b, a·b, and a^b are natural numbers as well. This does not always apply to subtraction, division, or taking roots, and the laws for these inverse operations do not belong to the character of natural numbers.
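To make this derivability concrete, here is a minimal sketch (my own illustration, not part of the original text; the function names are ad hoc) in which addition, multiplication, and raising to powers are defined recursively from the successor operation alone, in the spirit of Peano’s axioms:

```python
# A minimal sketch of Peano-style arithmetic: every operation reduces to
# the successor operation a+ (here modelled as a + 1 on Python integers).
def successor(a: int) -> int:
    return a + 1

def add(a: int, b: int) -> int:
    # a + 0 = a;  a + (b+) = (a + b)+
    return a if b == 0 else successor(add(a, b - 1))

def mul(a: int, b: int) -> int:
    # a . 0 = 0;  a . (b+) = (a . b) + a
    return 0 if b == 0 else add(mul(a, b - 1), a)

def power(a: int, b: int) -> int:
    # a^0 = 1;  a^(b+) = (a^b) . a
    return 1 if b == 0 else mul(power(a, b - 1), a)

assert add(2, 3) == 5 and mul(2, 3) == 6 and power(2, 3) == 8
```

The natural numbers are closed under all three of these functions, whereas no such definition inside the natural numbers exists for the inverse operations.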
The set of natural numbers is the oldest and best-known set of numbers. Yet it is still subject to active mathematical research, resulting in newly discovered regularities, making arithmetic an empirical science.[19] Some theorems relate to prime numbers. Euclid proved that the number of primes is unlimited. An arithmetical law says that each natural number is the product of a unique set of primes. Several other theorems concerning primes have been proved or conjectured.[20]
The whole-part relation
It is very important to distinguish a set from its members. The relation of a set to its elements is a numerical law-subject relation, for a set is a number of elements. By contrast, the relation of a set to its subsets is a whole-part relation that can be projected on a spatial figure having parts. A subset is not an element of the set, not even a subset having only one element.[22] A set may be a member of another set. For instance, the numerical equivalence class [n] is a set of sets.[23] However, the set of all subsets of a given set A (the ‘power set of A’) should not be confused with the set A itself.[24]
Overlapping sets have one or more elements in common. The intersection A∩B of two sets is the set of all elements that A and B have in common. The empty set or zero set ∅ is the intersection of two sets having no elements in common. Hence, there is only one zero set. It is a subset of all sets.[25] If a set is considered a subset of itself, each set has trivially two subsets. (An exception is the zero set, having only itself as a subset.)
A measure is a quantitative relation between sets, e.g., between a set and its subsets. It does not deliver a numerical relation between a set and its elements, and it is not a measure of the number of elements in the set. If two plane spatial figures do not overlap but have a boundary in common, the intersection of the two point sets is not zero, but its measure is zero: the area of the common boundary is zero. For a spatial set, only subsets having the same dimension as the set itself have a non-zero measure. Integral calculus is a means to determine the measure of a spatial figure, its length, area or volume.
The number 2 is natural, but it is an integer, a fraction, a real number and a complex number as well. Precisely formulated: the number 2 is an element of the sets of natural numbers, integers, fractions, real, and complex numbers. This leads to the conjecture that the character of natural numbers does not determine a class of things, but a class of relations. The meaning of a number depends on its relation to all other numbers and the disposition of numbers to generate other numbers.[27]
The natural numbers constitute a universal relation frame for all denumerable sets. Peano’s formulation characterizes the natural numbers by a sequence, that is a relation as well. The integers, the rational, real, and complex numbers are definable as relations as well. Therefore, it is not strange that the number 2 answers different types of relations. A quantitative character determines a set of numbers, and a number may belong to several sets.
Group theory
Since the addition of two numbers yields a number, and the difference between any two numbers is a number, some sets of numbers may form a group. In 1831 Évariste Galois introduced the concept of a group in mathematics as a set of elements satisfying the following four axioms.[28] A group is a collection of distinct elements A, B, C, … on which a combination procedure is defined, such that for any pair of elements A, B an element AB can be generated, according to the following rules:
(a) If A and B are elements of the group, then the combination AB is also an element.[29]
(b) (AB)C=A(BC)=ABC – the group operation is associative
(c) the group contains one element I, called the identity element, such that for each element A of the group, AI=IA=A.
(d) to each element A corresponds an inverse element A’, such that AA’=A’A=I.
Here, the equality sign (=) must be understood as ‘is the same as’, ‘is equal to’, ‘cannot be distinguished from’, or ‘can always be substituted for’. There is no intrinsic way to distinguish the element AA’ from the element I, for instance. The extrinsic lingual distinction only accounts for the different possibilities of generating the same element.
These four rules form the generic character of a group (10.5). They do not fully determine a group, however. As to the law side, one has to specify the group operation, and as to the subject side, one has to indicate the members of the group, by stipulating some members as a set of generators. The other members are dynamically generated by application of the group operation. Several different groups (i.e., having different members, and eventually a different group operation) may have the same group structure. In that case the groups are called different isomorphic models or representations of the same group structure. An isomorphism consists on the subject side of a one-to-one correspondence between the members of the two groups, and on the law side of a parallelism between the respective group operations. If the members A, B, C in one group correspond with the members K, L, M in the other group, and if AB=C, then KL=M. Thus the law does not define its subjects: the subject side cannot be reduced to the law side. Isomorphism plays an important part in finding objective relations, e.g. by projecting physical relations on mathematical ones.
As a character, a group is qualified by the numerical relation frame. It has no foundation in a preceding frame (because there is no one), and it has the disposition of being applied in the numerical and later frames, in particular in the study of characters qualified by the physical, kinetic, and spatial modal aspects.
Groups may be finite or infinite. The smallest groups contain just one element – evidently the identity element. For example, the number 1 forms a multiplication group, and the number 0 an addition group. These two groups are even isomorphic. The number 1 and -1 also form a multiplication group, consisting of just two members. Finite groups are very important in the physics of typical structures, but infinite groups are more interesting for the extension of the set of natural numbers.
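The four rules can be checked mechanically for small cases. The following brute-force sketch (my own illustration, with ad hoc names, not the author’s) verifies closure, associativity, identity, and inverses for the two-member multiplication group mentioned above:

```python
from itertools import product

def is_group(elements, op):
    """Brute-force check of the four group axioms for a finite set."""
    elements = set(elements)
    # (a) closure
    if any(op(a, b) not in elements for a, b in product(elements, repeat=2)):
        return False
    # (b) associativity
    if any(op(op(a, b), c) != op(a, op(b, c))
           for a, b, c in product(elements, repeat=3)):
        return False
    # (c) an identity element I with AI = IA = A
    identity = next((e for e in elements
                     if all(op(e, a) == a == op(a, e) for a in elements)), None)
    if identity is None:
        return False
    # (d) each element has an inverse
    return all(any(op(a, b) == identity for b in elements) for a in elements)

assert is_group({1, -1}, lambda a, b: a * b)        # the two-member group
assert not is_group({1, 2, 3}, lambda a, b: a * b)  # fails closure
```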
Negative and rational numbers as relations
The set of natural numbers does not form a group, though if addition is taken as the group operation, the natural numbers satisfy rules (a) and (b). But there are no inverse elements, which means that within the set of natural numbers, subtraction is not always defined. However, by including the number zero and the negative integers, one arrives at a group. The integers are generated as members of the smallest addition group, which includes among its members the natural numbers. The group operation is addition, the inverse of a positive integer is a negative integer, and vice versa. To show that one has to specify some members of the group, it should be observed that the addition group of integers is isomorphic to the addition group of even integers, of triples, etc. In this approach the positive integers are identified with the natural numbers.
Within the group structure, the element AB’ can be considered as expressing the intrinsic relation between two elements A and B (for short, I shall say that AB’ is the relation between A and B). The relation between two integers is their numerical difference. The reverse relation is BA’. The relation of an element to itself is AA’=I, the identity. Because AI=IA=A, the relation of an element to the identity element is identical with the element itself. Therefore, the numerical difference between two numbers, as the basic numerical subject-subject relation, is a numerical modal subject itself.[30]
Difference is not the only conceivable numerical relation. From addition we can derive the operation of multiplication of two natural numbers (as an abstraction of the repeated addition of equally numbered collections).[31] If we introduce multiplication as a group operation, we generate the positive rational numbers as the members of the smallest multiplication group, whose members include the natural numbers.[32] For the group of positive rational numbers the identity element is the number 1, the inverse of a rational number is a fraction, and the group relation is the ratio between two rational numbers. The set of all rational numbers (positive, negative, and zero) is then defined as the addition group, whose elements include the positive rational numbers.[33] It cannot be defined as a multiplication group, because the number 0 has no inverse for multiplication.[34]
For the introduction of the rational numbers two group operations are required. This leads to the idea of a field, another ‘algebra’. A field is a collection of subjects in which two operations are defined (e.g., addition and multiplication), each satisfying the same rules as for groups, except that the identity element for one operation has no inverse with respect to the second operation. The two operations are connected via the distributive law: (A+B)×C=(A×C)+(B×C). Examples are the fields of rational numbers, of real numbers, and of complex numbers. (There are finite fields as well.) They have the usual addition and multiplication as operations, whereas dividing by zero is not defined.
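Python’s fractions module models the field of rational numbers exactly, so the distributive law and the exceptional role of zero can be spot-checked. This is my own illustration, not part of the text:

```python
from fractions import Fraction

a, b, c = Fraction(1, 2), Fraction(-3, 4), Fraction(5, 7)

# The two group operations are connected by the distributive law:
assert (a + b) * c == a * c + b * c

# Every non-zero rational has a multiplicative inverse ...
assert a * (1 / a) == 1

# ... but the additive identity 0 has none: dividing by zero is undefined.
try:
    Fraction(1) / Fraction(0)
except ZeroDivisionError:
    pass
```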
Discrete and dense sets
The group structure does not specify an order between the elements. The groups discussed so far can be ordered according to the law mentioned at the beginning of this section: A>B if A−B>0, where ‘larger than zero’ means ‘being positive’. A set is called discrete in a certain order, if in that order each element has just one successor. Every finite collection is discrete, and so are the sets of natural and integral numbers. In a series the natural numbers (acting as ‘ordinal numbers’) serve as indices. A set is called denumerable if its members can be put in such a series, i.e., if there is a one-to-one correspondence between the members of this set and the natural numbers. The order of this series is extrinsic, being given by the indices. An intrinsic numerical order is determined by the numerical values of the set’s members themselves.
Now consider the set of the rational numbers, which can be arranged in a series, as is shown in any textbook on number theory.[35] In this series, in which a member is not necessarily larger than all preceding members, the members are arranged in an extrinsic numerical order (of the indices). The rational numbers in their intrinsic numerical order of smaller and larger do not form a discrete series, but a dense set. This means, in any interval there is at least one rational number, and therefore, an infinitude of rational numbers in any interval, and there is no empty interval, however small.
With the concept of a dense set, the limit is reached of the closed numerical modal aspect. It is the starting point for the opening up of this aspect, anticipating later modal aspects, as will be seen presently.
2.3. The dynamic development of the numerical relation frame
The rational numbers are denumerable, at least if put in a somewhat artificial order. The infinite sequence 1/1; 1/2,2/1; 1/3,2/3,3/1,3/2; 1/4,2/4,3/4,4/1,4/2,4/3; 1/5, … including all positive fractions is denumerable. In this order it has the cardinal number of aleph 0. However, this sequence is not ordered according to increasing magnitude.
In their natural (quantitative) order of increasing magnitude, the fractions lie close to each other, forming a dense set. This means that no rational number has a unique successor. Between each pair of rational numbers a and b there are infinitely many others.[36] In their natural order, rational numbers are not denumerable, although they are denumerable in a different order. In contrast to a finite set, whether an infinite set is countable may depend on the order of its elements.[37]
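The quoted series is easy to generate: group n collects first the fractions p/n with p&lt;n, then n/q with q&lt;n. A sketch of mine (unreduced fractions are kept as pairs, since the series includes 2/4):

```python
def positive_fractions():
    """Enumerate all positive fractions in the order quoted above."""
    yield (1, 1)
    n = 2
    while True:
        for p in range(1, n):
            yield (p, n)   # 1/n, 2/n, ..., (n-1)/n
        for q in range(1, n):
            yield (n, q)   # n/1, n/2, ..., n/(n-1)
        n += 1

gen = positive_fractions()
first = [next(gen) for _ in range(7)]
assert first == [(1, 1), (1, 2), (2, 1), (1, 3), (2, 3), (3, 1), (3, 2)]
```

Every positive fraction receives an index in this extrinsic order, which is exactly what its denumerability means; in the intrinsic order of magnitude no such indexing is possible.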
Only in the 19th century did the distinction between a dense and a continuous set become clear.[38] Before that, continuity was often defined as infinite divisibility, not only of space. For ages, people have debated the question of whether matter is continuous or atomic. Could one go on dividing matter, or does it consist of indivisible atoms? They overlooked a third possibility, namely that matter could be dense.
By his famous diagonal method, Cantor proved in 1892 that the set of real numbers is not denumerable. Cantor indicated the infinite amount of real numbers by the cardinal number C. He posed the problem of whether C equals aleph 1, the transfinite number succeeding aleph 0. Gödel and Cohen later proved this hypothesis to be independent of the standard axioms of set theory, so that at the end of the 20th century the problem was still unsolved.
A theorem states that each irrational number is the limit of an infinite sequence or series[39] of rational numbers, e.g., an infinite decimal fraction. This seems to prove that the set of real numbers can be reduced to the set of rational numbers, like the rational numbers are reducible to the natural ones, but that may be questioned. Any procedure for finding these limits cannot be carried out countably, one limit after another: this would only lead to a denumerable (even if infinite) amount of real numbers.[40] To arrive at the set of all real numbers requires a non-denumerable procedure. But then we would use a property of the real numbers (not shared by the rational numbers) to make this reduction possible. And this appears to result in circular reasoning.
Continuous sets
Consider a continuous line segment AB. We want to mark the position of each point by a number giving the distance to one of the ends.[42] These numbers include the set of infinite decimal fractions that Cantor proved to be non-denumerable. Hence, the set of points on AB is not denumerable. If we mark the point A by 0 and B by 1, each point of AB gets a number between 0 and 1. This is possible in many ways, but one of them is highly significant, because it uses the rational numbers to introduce a metric, assigning the number 0.5 to the point halfway between A and B, and analogously for each rational number between 0 and 1. (This is possible in a denumerable procedure). Now the real numbers between 0 and 1 are defined as numbers corresponding one-to-one to the points on AB. These include the rational numbers between 0 and 1, as well as numbers like π/4 and other limits of infinite sequences or series. The irrational numbers are surrounded by rational numbers (forming a dense set) providing the metric for the set of real numbers between 0 and 1.
The set of line segments on a straight line having a common end point is also a group. The group operation is the spatial addition of two line segments, the inverse is a line segment in the opposite direction, the identity element is a line segment of length zero, and the group relation is a line segment equal in length to that between the non-common terminal points of two line segments. In the present context, the notions of line segment, congruence, and spatial addition are irreducible concepts: they belong to the spatial modal aspect.
Now the real numbers are introduced as elements of the group (a) whose elements include the rational numbers; (b) which has arithmetical addition as its group operation; and (c) which is isomorphic to the former group of line segments. In order to make the one-to-one correspondence between the elements of the two groups definite, an arbitrary unit segment must be chosen. This shows that the set of real numbers is not identical with the set of all segments with one common end point. In contrast, the set of all points on a line does not form a group. The reference (a) to the rational numbers is necessary to give the reals the character of numbers. Condition (c) is not sufficient for this purpose. A set is called continuous if its elements correspond one-to-one to the points on a line segment.[43] There is no one-to-one correspondence possible between the elements of a denumerable group and those of a continuous group. A continuous set cannot be reduced to a denumerable one. The number of elements in a continuous group is always infinite. On the one hand, the continuity of the set of real numbers anticipates the continuity of the set of points on a line. On the other hand, it allows of the possibility to project spatial relations on the quantitative relation frame.
The introduction of the set of real numbers as an isomorphic copy of a spatial group already indicates that the meaning of the real numbers is not originally numerical. Their meaning anticipates the spatial modal aspect. This means that the concept of isomorphy is a mathematical expression of the philosophical idea of projection. In contrast, the negative integers and the rational numbers may be considered expressing modal numerical relations between natural numbers and among themselves, and thus as modal abstractions between discrete collections. So the modal meaning of negative and rational numbers remains completely within the closed numerical modal aspect of discrete quantity.
Because the set of rational numbers is dense, it contains Cauchy sequences: infinite sequences of elements An, given according to some law, such that for any positive number ε (however small) there is a number N, such that if n>N and m>N, then |Am−An|<ε.[44] It may be observed that the existence of this limit does not depend on an actual completed infinitude of the series as a totality: an infinite discrete set does not have a last member.
It may occur that the limit A of a Cauchy sequence is not a member of the set. There are Cauchy sequences of rational numbers whose limits are not rational numbers themselves. The set of all real numbers consists of the limits of all Cauchy sequences of rational numbers, sequences with the same limit being identified. The inclusion of these limits completes the dense set of rational numbers, making it a continuous set of real numbers.[45]
However, the real numbers cannot be defined in this way. For instance, it is already presupposed that the limit A of a Cauchy sequence of rational numbers is a number, because otherwise the numerical difference |A−An| would have no meaning.[46] However, for the same reason, it is objectionable to say that this limit is not a number. It is an assumption to state that the limits of Cauchy sequences of rational numbers are (real) numbers, and one has to show that this assumption is warranted.
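A concrete example may help: the Babylonian iteration produces a Cauchy sequence consisting entirely of rational numbers whose limit, √2, is not rational. The sketch below is my own illustration, using exact rational arithmetic:

```python
from fractions import Fraction

def babylonian(n_terms):
    """Rational Cauchy sequence A(n+1) = (A(n) + 2/A(n))/2, converging to sqrt(2)."""
    a, terms = Fraction(1), []
    for _ in range(n_terms):
        a = (a + 2 / a) / 2
        terms.append(a)
    return terms

terms = babylonian(6)
# The differences |Am - An| shrink below any epsilon ...
assert abs(terms[-1] - terms[-2]) < Fraction(1, 10**20)
# ... yet no term is the limit: there is no rational square root of 2.
assert all(t * t != 2 for t in terms)
```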
The quantitative meaning of numbers
According to Dooyeweerd, rational and real numbers must be considered mere functions of numbers, the only original numbers being the natural numbers.[47] For a similar reason some mathematicians[48] introduced the integer and rational numbers as equivalence classes of differences or ratios between natural numbers. Thus the integer 2 is the equivalence class of all differences (2+b)–b, where b ranges over all natural numbers. In this view the positive integers should not be identified with the natural numbers, as I did, and, depending on the context, the symbol ‘2’ may stand for a natural number, an integer, a rational number, and eventually for a real or complex number. This view is understandable if one considers the numbers as logically definable. In my view, numbers are discovered and are modal subjects under a law. Therefore I have no difficulty in identifying the number 2 as being the same member in different sets.
I agree that the natural numbers are primitives, whereas the existence of rational and real numbers depends on the existence of natural numbers. Nevertheless it is meaningful to speak of numbers, also in the case of negative, rational and real numbers, as modal subjects to numerical laws. In order to see this, one has to recall that the mutual relationship of law and subject implies that there are no laws without subjects, or subjects without laws. It may be imagined that mankind first discovered certain subjects (e.g., the natural numbers) and some laws (the laws of addition and multiplication) to which these are subjected. Afterwards, other laws were found (subtraction, division) pertaining to the same subjects. But then one also discovered other subjects (negative and rational numbers) subject to the same laws. In my view there is no reason to call these newly discovered subjects mere functions of the already known primitive subjects.[49] The real numbers are also subjected to the same laws of addition, multiplication, subtraction, and division as the rational numbers are.[50] Thus these numerical predicates of infinite sets of rational numbers behave as subjects to numerical laws.
As observed, the meaning of the negative and rational numbers remains completely within the closed numerical modal aspect, because they denote numerical relations between discrete collections. The set of all real numbers turns out to be non-denumerable, i.e., it is impossible to find a one-to-one correspondence of this set with the set of natural numbers. The meaning of a non-denumerable set cannot be found in the closed numerical modal aspect. But this meaning is found with the discovery of the one-to-one correspondence between the set of all real numbers and the set of line segments introduced above. Hence, the meaning of the set of real numbers anticipates the spatial modal aspect. It requires the dynamic development of the numerical relation frame.
This is also the case with the meaning of individual real numbers. Real numbers objectify magnitudes, first of all spatial magnitudes: lengths, areas, volumes. It was the great discovery of the Pythagorean school, that the rational numbers are insufficient for the numerical objectification of spatial magnitudes. The diagonal in a unit square has a length of √2, and it can easily be shown that this is not a rational number. In order to represent such magnitudes, one needs the real numbers.[51]
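The classical parity argument behind this claim runs, in outline, as follows (a standard reconstruction, not a quotation from the text):

```latex
\text{Suppose } \sqrt{2} = p/q \text{ with } p, q \text{ natural numbers without a common factor.} \\
\text{Then } p^2 = 2q^2, \text{ so } p^2 \text{ is even, hence } p = 2r \text{ is even.} \\
\text{Then } 4r^2 = 2q^2, \text{ i.e. } q^2 = 2r^2, \text{ so } q \text{ is even as well,} \\
\text{contradicting the assumption that } p \text{ and } q \text{ have no common factor.}
```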
Therefore the meaning of the real numbers anticipates the later modal aspects. The limit of an infinite series is never actualized, but in the retrocipatory direction, real numbers become actual magnitudes. The length of a line segment is an actual, real magnitude. When the numerical relation frame is developed into the quantitative one, its original meaning is deepened and relativized, from numerical to quantitative. The deepening means that not only discrete sets, but also magnitudes can be numerically ordered. With real numbers, non-numerical subjects can be ordered according to their magnitude without gaps or holes. This relativization of modal meaning entails the loss of the discrete or denumerable character which numbers have in the numerical relation frame.
2.4. Vectors
The temporal order in the numerical relation frame is that of earlier and later, and two numbers are called equal if they have the same position in this order. Therefore only one number 2 should be allowed, whether understood as a natural number, an integer, a rational, or a real number. However, if the order of smaller and larger is applied to concrete subjects or collections, several subjects may be equivalent with respect to some property.
In that case there will be at least one other property with respect to which they will be different. In many cases it will be possible to order a set of subjects according to two or more independent properties. Thus there are series with two, three, or more indices. Discrete series can always be ordered in a single numerical order, but this is not always desirable. It might also be that two independent properties have a continuous spectrum, in which case an unequivocal single numerical order is impossible. This notion of independence anticipates the spatial order of simultaneity, and therefore discloses the numerical relation frame on the law side.
Magnitudes are non-numerical relations which can be objectified by real numbers. There are non-numerical relations which can only be ordered in a serial order of smaller and larger, if they are decomposed into components, which simultaneously determine these relations. This applies in the first place to spatial position, but also to force, velocity, or the physical state of a system. Such relations are not objectified by a single real number, but by a multiplet of real numbers, called a vector. The minimum amount of real numbers needed for an objectification of a property or relation is called the latter’s dimension. The corresponding vector has an equal number of independent components, which is, therefore, sometimes called the vector’s dimension.
By way of example, and because of their relevance to physics, the present section reviews the theories of vectors, of complex numbers, and of Hilbert space.
Number vectors
A number vector is defined as an n-tuple of n real numbers, written bold-faced as a=(a1,a2,a3,…,an) and being subjected to some well-known rules.[52] The vectors with the same number n of components form a group with vector addition as group operation, and the zero vector (0,0,0,…,0) as its identity element. The inverse of a vector is –a=(-1)a. It is easily verified that the set of real numbers is isomorphic to the set of one-component vectors. The independence of the components is not changed by addition.
Next the scalar product is defined, a functional of two vectors having the same number of components, as the real number a·b=a1b1+a2b2+…+anbn. Because the result is a number, not a vector, this product does not define a group. The norm |a| of a vector a is defined by |a|^2=a·a=a1^2+a2^2+…+an^2.
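In code, these definitions read as follows (my own sketch, with ad hoc names):

```python
import math

def vec_add(a, b):
    """Component-wise addition: the group operation for n-component vectors."""
    return tuple(x + y for x, y in zip(a, b))

def scalar_product(a, b):
    """a.b = a1*b1 + a2*b2 + ... + an*bn; the result is a number, not a vector."""
    return sum(x * y for x, y in zip(a, b))

def norm(a):
    """|a| = sqrt(a.a)"""
    return math.sqrt(scalar_product(a, a))

a, b = (1.0, 2.0, 2.0), (3.0, -4.0, 0.0)
assert vec_add(a, b) == (4.0, -2.0, 2.0)
assert scalar_product(a, b) == -5.0
assert norm(a) == 3.0
```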
Complex numbers
One may wonder whether there exists an operation analogous to multiplication that gives rise to a field of vectors. This is indeed the case with the two-component vectors called complex numbers, often written as a1+a2i=a1(1,0)+a2(0,1)=(a1,a2).
Here the vector (1,0) is identified with the real number 1, and the vector (0,1)=i is the so-called imaginary unit. The addition of complex numbers is defined above. We call a*=a1−a2i the complex conjugate of a=a1+a2i. The complex conjugate of a ‘real number’ (a1,0) is identical with itself. The product of two complex numbers is defined as the complex number (a1,a2)(b1,b2)=(a1b1−a2b2, a2b1+a1b2).
Together with the addition, this defines a field. The unit vector is (1,0), and the multiplicative inverse of a is a*/(a·a*). We see that i^2=−1, according to the popular definition of i.[53]
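The product rule can be verified against Python’s built-in complex arithmetic; a sketch of mine:

```python
def c_mul(a, b):
    """(a1,a2)(b1,b2) = (a1*b1 - a2*b2, a2*b1 + a1*b2)"""
    a1, a2 = a
    b1, b2 = b
    return (a1 * b1 - a2 * b2, a2 * b1 + a1 * b2)

# i^2 = -1: the vector (0,1) multiplied by itself gives (-1,0).
assert c_mul((0, 1), (0, 1)) == (-1, 0)

# Agreement with the built-in complex type:
zw = complex(1, 2) * complex(3, -4)
assert c_mul((1, 2), (3, -4)) == (zw.real, zw.imag)
```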
The solutions of many problems concerning functions of real numbers are only possible, or more easily obtained, if the latter are considered as vectors (a,0) – i.e., if we consider those functions as functions of complex numbers.[54] This shows that the full meaning of disclosed modal subjects (real numbers) becomes clear only if the law side is also opened up (by the introduction of vectors). Besides vectors, there are other structures, like tensors and matrices, in which each component has two or more indices. They anticipate more complicated spatial or non-spatial relations than vectors are capable of doing. With the introduction of real and complex numbers it is also possible to anticipate the kinetic and later modal aspects, as in integral and differential calculus.[55]
Hilbert space
The concept of a vector can be further developed into vectors with complex components and functions of real or complex variables. Quantum physics makes use of a so-called Hilbert space (chapter 9), which is not a space (there are no spatial subjects in it), but a set of complex functions, anticipating the spatial and later modal aspects.[56] Here it is not immediately necessary to define the scalar product (which can be different for different cases), provided the functions belonging to the set and the scalar product conform to some quite general rules.[57]
The possibility of mapping a Hilbert space on a set of vectors means that all Hilbert spaces with the same value for m are isomorphic to each other.[58] This number m, the dimension of the set, may be finite (as assumed above), infinite, and even non-denumerable.
2.5. The spatial relation frame
In 1899, David Hilbert formulated the foundations of geometry as relations between points, straight lines and planes, without defining these.[59] Gottlob Frege thought that Hilbert referred to known subjects, but Hilbert denied this. He was only concerned with the relations between things, leaving aside their nature. According to Paul Bernays, geometry is not concerned with the nature of things, but with ‘a system of conditions for what might be called a relational structure’.[60] Inevitably, structuralism influenced the later emphasis on structures.[61]
Topological, projective, and affine geometries are no more metric than the theory of graphs.[62] They deal with spatial relations without considering the quantitative relation frame. I shall not discuss these non-metric geometries. The 19th- and 20th-century views about metric spaces and mathematical structures turn out to be much more important to modern physics.
The metric of objective magnitudes
Science and technology prefer to define magnitudes that satisfy quantitative laws.[63] To make calculations with a spatial magnitude requires its projection on a suitable set of numbers (integral, rational, or real), such that spatial operations are isomorphic to arithmetical operations like addition or multiplication. This is only possible if a metric is available, a law to find magnitudes and their combinations.
The dynamic development of various metrics is not only indispensable for the natural sciences. If a metric system is available, cooperating governments or the scientific community may decide to prescribe a metric to become a norm, for the benefit of technology, traffic, and commerce.[64] The processing and communication of experimental and theoretical results requires the use of a metric system.
Spatial points
Whether a point is a subject or an object depends on the nomic context, on the relevant laws. The relative position of the ends of a line segment determines in one context a subject-subject relation (to wit, the distance between two points), in another context a subject-object relation (the objective length of the segment). Likewise, the sides of a triangle, having length but not area, determine subjectively the triangle’s circumference, and objectively its area.
The sequence of numbers can be projected on a line, ordering its points numerically. To order all points on a line or line segment the natural, integral or even rational numbers are not sufficient. It requires the complete set of real numbers. The spatial order of equivalence or co-existence presents itself to full advantage only in a more-dimensional space. In a three-dimensional space, all points in a plane perpendicular to the x-axis correspond simultaneously to a single point on that axis. With respect to the numerical order on the x-axis, these points are equivalent. To lay down the position of a point completely requires several numbers (x,y,z,…) simultaneously, as many as the number of dimensions. Such an ordered set of numbers constitutes a number vector (2.4).
For the character of a spatial figure too, the number of dimensions is a dominant characteristic. The number of dimensions belongs to the laws constituting the character. A plane figure has length and width. A three-dimensional figure has length, width and height as mutually independent measures. The character of a two-dimensional figure like a triangle may be interlaced with the character of a three-dimensional figure like a tetrahedron. Hence, dimensionality leads to a hierarchy of spatial figures. The base of the hierarchy is formed by one-dimensional spatial vectors.
Numerical and spatial vectors
Euclidean and non-Euclidean metrics
In Euclidean geometry, the relative position of points is found with the help of a Cartesian coordinate system, allowing one to represent each spatial point by a vector (x,y,z,…). Having two points characterized by the vectors (x1,y1,z1,…) and (x2,y2,z2,…), the difference vector (x1−x2, y1−y2, z1−z2, …) characterizes the relative position of the two points. The distance of the two points is the norm d of this vector, determined by d^2=(x1−x2)^2+(y1−y2)^2+(z1−z2)^2+…
This expression is called the metric of Euclidean space. A metric is a law according to which a numerical value can be assigned to a non-numerical property or relation. The above formula is an objective representation of this law for the determination of lengths and distances in Euclidean space.
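Stated as code (my own sketch, valid for any number of dimensions):

```python
import math

def euclidean_distance(p, q):
    """d^2 = (x1-x2)^2 + (y1-y2)^2 + ...: the norm of the difference vector."""
    return math.sqrt(sum((pi - qi) ** 2 for pi, qi in zip(p, q)))

assert euclidean_distance((0, 0), (3, 4)) == 5.0
assert euclidean_distance((1, 1, 1), (1, 1, 1)) == 0.0
```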
The metric depends on the symmetry of space. In an Euclidean space, Pythagoras’ law determines the metric.[69] Since the beginning of the 19th century, mathematics acknowledges non-Euclidean spaces as well.[70] (Long before, it was known that on a sphere the Euclidean metric is only applicable to distances small compared with the radius.) Preceded by Carl Friedrich Gauss, in 1854 Bernhard Riemann formulated the general metric for an infinitesimally small distance in a multidimensional space.[71]
For a non-Euclidean space, the coefficients in the metric depend on the position.[72] To calculate a finite displacement requires the application of integral calculus. The result depends on the choice of the path of integration. The distance between two points is the smallest value of these paths. On the surface of a sphere, the distance between two points corresponds to the path along a circle whose centre coincides with the centre of the sphere.
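For the sphere this shortest path has a closed form: the radius times the central angle between the two points. A sketch of mine, with points given in latitude and longitude (degrees) and a unit radius assumed:

```python
import math

def great_circle_distance(lat1, lon1, lat2, lon2, radius=1.0):
    """Distance along the great circle whose centre is the sphere's centre."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    cos_angle = (math.sin(phi1) * math.sin(phi2) +
                 math.cos(phi1) * math.cos(phi2) * math.cos(dlon))
    return radius * math.acos(max(-1.0, min(1.0, cos_angle)))

# A quarter of a great circle: from the pole to the equator.
assert abs(great_circle_distance(90, 0, 0, 0) - math.pi / 2) < 1e-9
```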
A non-Euclidean space is less symmetrical than an Euclidean one having the same number of dimensions. Motion as well as physical interaction may cause a break of symmetry in spatial relations.
2.6. Spatial subject-object relations
The distinction of subjects and objects as made in the philosophy of the cosmonomic idea (1.6) can best be illustrated with respect to spatial objects and objective magnitudes. The proper parts of a spatial subject cannot have more or less dimensions than the subject itself. A two-dimensional subject can only have two-dimensional parts. Just as collections can only be added if they have no members in common, magnitudes of spatial subjects can only be added if they have no common parts. But they may have common boundaries, because the boundaries are not parts of the subject. A boundary of a spatial subject always has a lower dimension than the subject itself, and, therefore, its subjective extension (with respect to the magnitude of the subject) is zero (it has ‘measure’ zero). Spatial boundaries have an objective meaning within the spatial modal aspect. They delimit the objective magnitude of the subjects, and they allow the introduction of numerical ordering within the spatial aspect.
The simplest spatial objects are points, having zero spatial extension. Points have an important spatial meaning as boundaries of a line segment. Spatial points serve to determine its length, the objective magnitude of the line segment. Similarly, in a two-dimensional space, a line segment can only function objectively, as a boundary of a triangle, e.g., by determining its area, which is again an objective spatial magnitude referring back to the numerical modal aspect. In this way the spatial relation frame is the first aspect to have objects as well as subjects.[74]
It is of no use to define a line, a plane, or a space as a collection of points, lines, or planes, respectively.[75] Although a line contains a continuous, non-denumerable collection of points, this cannot serve as a constitutive definition of a line. Rather the line constitutes the collection of points. Collections of this kind have a dependent meaning. This becomes apparent if one tries to assign a number to a collection of points on a line segment. It can easily be proved that there exists a one-to-one correspondence between the points of this line segment and the points of any other line segment, regardless of their relative length. Therefore, length, as an objective magnitude of the line segment, has no relation whatsoever to the number of points on the line segment.
2.7. Spatial subject-subject relations
There is a spatial relation between two subjects if they are bound together in a common spatial manifold. Thus the spatial order is coexistence, static simultaneity, or equivalence,[76] and the corresponding subject-subject relation is relative spatial position. In the kinematic modal aspect simultaneity has only a limited, analogical meaning, as is shown in the theory of relativity, whereas in the numerical order of before and after simultaneity is absent. Consider an (n–1)-dimensional boundary in an n-dimensional space, described by a continuous function f(r)=0, where r denotes the vector, ranging over all points in the n-dimensional space. All points on one side of the boundary are characterized by f(r)>0, and all points on the other side by f(r)<0. This shows once more that the concept of a boundary (a spatial object) refers to the numerical order of smaller and larger. With respect to this quasi-serial order, all points with vector r, such that f(r)=a, are equivalent. They simultaneously lie in the same (n–1)-dimensional manifold objectified by this equation.
Just as numerical relations are subjected to a serial order (2.2), spatial relations are subjected to an order of equivalence. A relation R(A,B) over a set is an equivalence relation if for any two elements A and B of the set it is determined whether or not R(A,B) holds, and if R is reflexive, symmetric, and transitive.[77]
All elements which are equivalent with a certain element A constitute the equivalence class of A. It is a sub-set of the whole set over which the equivalence relation R is defined. It can be shown that if this is the case there must be some property by which different equivalence classes in the same set can be distinguished. For instance, the equivalence classes of parallel lines in an Euclidean space can be distinguished by their relative direction.
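To exhibit the parallel-lines example: ‘is parallel to’ is reflexive, symmetric, and transitive, and its equivalence classes are distinguished by the shared property of direction. The sketch below is my own illustration, representing lines y = m·x + c as (slope, intercept) pairs:

```python
from collections import defaultdict

lines = [(1, 0), (1, 3), (2, -1), (1, -5), (2, 4), (0, 7)]

# Group the lines into equivalence classes of parallels: the same slope,
# i.e. the same direction, distinguishes the classes within the set.
classes = defaultdict(list)
for slope, intercept in lines:
    classes[slope].append((slope, intercept))

assert classes[1] == [(1, 0), (1, 3), (1, -5)]
assert len(classes) == 3   # three distinct directions, three classes
```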
Spatial figures
Consider a simple spatial problem: in which ways can spatial figures differ or be equivalent? Generally speaking, by their shape, their magnitude, and their relative position. If two subjects have the same shape, they are called similar. If they also have the same magnitude (area or volume) they are called congruent. The concept of magnitude refers back to the numerical modal aspect and, more specifically, to the operation of addition: if we take two disjoint subjects together, we have to add their magnitudes. The concept of similarity is an equivalence relation, but it clearly does not lead to a universal ordering of spatial subjects. The concept of magnitude allows us to find such an order, but this has a numerical, not a spatial character. Only spatial position can be qualified as an irreducible, universal, spatial subject-subject relation.
If two subjects are congruent, they can only differ in their position, because otherwise they would be identical. Two subjects may have parts in common, they may have nothing more than a boundary in common, or they may be completely disjoint. However, it is difficult to use the concept of relative position (although it is probably intuitively clear) without an objective description, namely, the distance and relative orientation of the two subjects. The shape of a subject is also determined by the relative position of its boundaries, just as its magnitude is. Relative position is subjected to the order of equivalence: the subjects considered should have the same dimension, and must be in the same manifold; these are equivalence relations.
Spatial figures can be objectified by their boundaries, in the simplest case by spatial points, for instance, a triangle by its vertices. If the shape of a subject is given, n points are needed to objectify the position of an n-dimensional subject in an n-dimensional manifold. As a consequence, the relative position of two subjects is objectified by the distances of the corresponding pairs of such points. This determines the relative distance as well as the relative orientation of the subjects. Thus the distance of two spatial points (besides the angle between two lines) is an objective, spatial relation.
2.8. Objectivity in the choice of coordinate systems
The Euclidean metric defined above is independent of the choice of the Cartesian coordinate system. It is not affected by any translation (or displacement), rotation, or inversion of the latter. I shall discuss this statement because the natural sciences claim to be objective, and because its relevance is called into question by modern and postmodern conventionalist authors.
The possibility of assigning real numbers to points on a straight line depends on the one-to-one correspondence between the numerical addition group of real numbers and the spatial addition group of line segments on a straight line. This correspondence is not unique in two senses: one is free to choose a unit, as well as to choose the common end point of the set of line segments. Objectivity requires that the distance between two points (the objective relation between two spatial subjects) be independent of this arbitrary choice. This is expressed by saying that the distance is invariant under the translations of the coordinate system: the space is homogeneous. All possible displacements form a group, isomorphic to the group of all spatial difference vectors.
When a zero point has been chosen, one is still free to choose a point to which to assign the number 1. This arbitrariness is limited by the requirement that the distance between two spatial points be independent of rotations of the coordinate system around any axis and about any angle. This is called the isotropy of space. It implies that the unit is the same along all coordinate axes. The set of all possible rotations in a plane forms a commutative group. Rotations around different axes in more-than-two-dimensional space form a non-commutative group.
Having chosen a set of coordinate axes and a unit, one is still free to assign the plus and minus directions on each axis. This results in inversion symmetry, the operation under which the distance must be invariant. The rotations together with the reflections form the full orthogonal group. Each finite translation or rotation can be obtained as the result of a continuous motion. However, this is not the case with inversion, which refers back to the numerical order of before and after. This implies that it will not always be possible to bring congruent spatial figures to coincide merely by a combination of translations and rotations. For example, the right- and left-hand gloves of a pair cannot replace each other.
By changing the unit, all distances are changed in the same ratio. All possible transformations of the unit form a multiplication group which is isomorphic to the multiplication group of positive real numbers. Therefore, by changing the unit, all distance ratios must remain the same. Distances should be geometrically independent of the choice of the unit of length, but this cannot be accounted for by a numerical analysis alone. In the theory of number vectors there is nothing of this kind: units do not occur in number theory. The meaning of the spatial subject-subject relation is determined by the irreducible meaning of the spatial relation frame, and cannot be reduced completely to the numerical relations which objectify spatial relations. From an arithmetical point of view, the replacement of the metre by the centimetre as a unit of length causes all distances to become a hundred times larger. Transformations of this kind are sometimes called trivial, but they are not, since they express the mutual irreducibility of the numerical and the spatial modal aspects.[78]
These invariance properties are not only relevant to distances, but also clarify the concepts of congruence and similarity. Two spatial figures (irrespective of their relative position) are congruent if the one can be transformed into the other by an operation belonging to the full group of translations, rotations, and inversion. Two figures are similar (having the same shape) if besides such an operation all linear dimensions of one figure must be multiplied by a real number in order to arrive at the same result. This implies that if two figures are congruent or similar, they remain so under any transformation of the coordinate system of the types discussed here.
The standard Euclidean metric is invariant under translations, rotations, and the inversion of the coordinate system. In contrast, one can show that any other metric singles out a particular point, line, plane, or direction. Thus we can say that the standard metric represents the isotropy and homogeneity of space, which are assumed here because only spatial relations between subjects are relevant, and not the ‘absolute position’ of any subject.
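The invariance of the standard metric can be spot-checked numerically: rotating and translating the coordinate system (equivalently, applying the transformation to the points themselves) leaves every distance unchanged. A two-dimensional sketch of mine:

```python
import math

def distance(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def transform(p, angle, shift):
    """Rotate the point by `angle` about the origin, then translate by `shift`."""
    c, s = math.cos(angle), math.sin(angle)
    x, y = p
    return (c * x - s * y + shift[0], s * x + c * y + shift[1])

p, q = (1.0, 2.0), (4.0, 6.0)
angle, shift = 0.7, (3.0, -2.0)
d_before = distance(p, q)
d_after = distance(transform(p, angle, shift), transform(q, angle, shift))
assert abs(d_before - d_after) < 1e-12   # homogeneity and isotropy of space
```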
The metric is only dependent on the choice of the unit. This arbitrariness reflects the amorphousness of space, by which we mean that we cannot assign a certain amount of points to a certain line segment. In fact, a one-to-one correspondence is possible between the points of any pair of intervals, irrespective of their relative lengths. Therefore, the length of an interval as expressed by a certain number, is not an intrinsic spatial property. This is properly stressed by Adolf Grünbaum in his extensive studies on the alleged conventionality of the metric.[79] Grünbaum is the main 20th-century (though moderate) proponent of conventionalism. He repeatedly refers to Henri Poincaré and Bernhard Riemann, but, in fact, conventionalism is merely a modern form of nominalism, which has its roots in the late Middle Ages and was defended by George Berkeley in the 18th and Ernst Mach in the 19th century.[80] Grünbaum uses the amorphousness of space as an argument for the equivalence of all conceivable coordinate systems, but does not admit that some coordinate systems should be preferred if they express the symmetry properties of space.
In the non-standard metric of a semiplane discussed by Grünbaum, the distance is not invariant under a translation of the coordinate system along the y-axis.[81] The non-standard metric which he discusses elsewhere[82] is not invariant under rotations of the coordinate system. As Grünbaum rightly observes, the assignment of real numbers to spatial points only effects a coordinatization, not a metrization of the manifold.[83] However, his non-standard metrizations do not define proper spatial subject-subject relations. When a third spatial subject (the coordinate system) is used to objectify the spatial relations between two subjects, a metrization is required which keeps this spatial relation independent of the position of that third subject. This is a requirement of objectivity which presupposes the homogeneity and isotropy of space, that is, rejection of any absoluteness of space with respect to position or direction.[84]
This does not mean that other metrizations should be rejected in all circumstances. Often they are very useful (e.g., polar coordinates for spherical-symmetric problems). But this actually reverses Grünbaum’s argument: instead of agreeing with him that Cartesian coordinate systems are only used because they are often more convenient than others, we should say that non-standard metrics are only applied when they are convenient in certain circumstances. A unique property of the standard metric is its invariance under translation, rotation, and inversion. This is not a matter of convention, but follows from the homogeneity and isotropy of space. Grünbaum has paid too much attention to the amorphousness of space, which implies the arbitrariness of the unit, and has neglected the symmetry properties inherent in Euclidean geometry, reflecting those of space.
Grünbaum’s remarks could be accepted if they were related to topology, in which, e.g., one does not distinguish between a sphere and an ellipsoid, or a rectangle and a parallelogram. Topology differs from metrical geometry because it lacks a metric. The theorems of topology hold for a figure regardless of how it is continuously deformed. Grünbaum, however, directs his conventionalist views to metrical space.
2.9. The dynamic development of the spatial relation frame
The metric depends on the symmetry of space. In a Euclidean space, Pythagoras’ law determines the metric. Since the beginning of the 19th century, mathematics has acknowledged non-Euclidean spaces as well (2.5). Preceded by Carl Friedrich Gauss, Bernhard Riemann in 1854 formulated the general metric for an infinitesimally small distance in a multidimensional space.
The metric is determined by the symmetry of the space, even if that space is developed into kinetic space, as in the theory of relativity, or into the physical space called a field. A well-known example is the general theory of relativity, the relativistic theory of the gravitational field.[85]
The above criticism of Grünbaum’s conventionalist views also pertains to non-Euclidean manifolds, which are in general less symmetric than Euclidean ones. Grünbaum seems to overlook this. Only by tacitly assuming that the said requirement of objectivity (i.e., that the relative position of two subjects be independent of the choice of the reference system) is satisfied is it possible to describe the nature of a manifold by its metric. This requirement is satisfied in Euclidean space by the rotation, translation, and inversion invariance of its metric. In non-Euclidean space one must either have similar intrinsic symmetries (as in the case of a spherical surface), or refer to some extrinsic instance – for example, to a Euclidean space of higher dimension, to a rigid body,[86] to kinematic motion, or to gravity, as is done in relativity theory.
In his theory of curved manifolds, showing that the metric can be derived without reference to an outside system, Gauss tacitly assumed that the unit is the same in the orthogonal directions and at different positions. The metric, and thus the Gaussian curvature, depends on the method of measuring lengths adopted on the manifold.[87] Thus one can either start with the symmetries of the manifold and require that the metric be invariant under the allowed symmetry operations, as is the case for Euclidean or spherical geometry, or start with a rigid definition of length in order to investigate the structure of the manifold. One cannot have it both ways.
Non-Euclidean manifolds can be understood in two ways: as an (n–1)-dimensional boundary of an n-dimensional spatial subject (e.g., a spherical surface), or as a manifold whose metric is determined by kinematic or physical laws (as, e.g., in relativity theory). In the latter case the homogeneity and isotropy of space are relativized by those non-spatial laws: motion as well as physical interaction causes a break of symmetry in spatial relations. In the former case they are relativized by the n-dimensional subject whose (n–1)-dimensional boundary functions as a manifold. In both cases the spatial relations between subjects confined to such a manifold become non-Euclidean because of some restriction, like a boundary condition. This relativization is characteristic of the dynamic development of a relation frame. In kinematics or in physics, one speaks of a field as soon as the spatial isotropy and/or homogeneity is lost. A field may be homogeneous without being isotropic, or it may be neither homogeneous nor isotropic.
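For the former case the spherical surface may serve as a standard illustration. In the usual angular coordinates θ and φ, with R the radius of the embedding sphere, its metric is

\[ ds^2 = R^2\left(d\theta^2 + \sin^2\theta\, d\varphi^2\right), \]

which is invariant under all rotations of the sphere: the surface is intrinsically homogeneous and isotropic, although it is not Euclidean.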
Hence Euclidean geometry may be considered as having an original spatial meaning, whereas the meaning of non-Euclidean geometry is found by reference either to the numerical modal aspect (in the concept of a boundary), or to the kinematic and the physical aspects.
Multiply connected manifolds
The spatial modal aspect can also be developed on the law side by the introduction of multiply connected manifolds. In the simplest case, a linear manifold is open if, of any three of its points, one and only one lies between the other two. This is the case, for example, with a straight line or a parabola. A linear manifold may also be closed (a circle) or self-intersecting (a lemniscate). Two-dimensional manifolds may be simply connected (e.g., a plane) or multiply connected (e.g., a plane with a hole, a sphere, or a torus). Here a criterion for being simply connected is given by the concept of contraction. A two-dimensional manifold is called simply connected if any point and any closed curve meet the following two-part criterion: one can uniquely determine whether the point lies inside the curve, and, if that is the case, the curve can be continuously contracted to a point without leaving the manifold. The surface of a sphere is not simply connected in this sense, because it fails the first part of the criterion. The surface of a torus meets neither part of the criterion. In a similar way simple connectedness can be established for higher-dimensional manifolds, i.e., with the help of the concept of a boundary. Therefore these criteria of connectedness have an objective character.
Multiply connected manifolds are not irrelevant to physics. Gravitational and electrostatic fields are simply connected, but the magnetic field around a current-carrying conductor is multiply connected. As a consequence, a static electric field can be described by a potential, but such a magnetic field cannot.
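The physical background of this statement, in a brief sketch, is Ampère’s law: the line integral of the magnetic field B around a closed curve encircling a conductor carrying a current I equals

\[ \oint \mathbf{B} \cdot d\mathbf{l} = \mu_0 I \neq 0, \]

so that B cannot be the gradient of a single-valued potential. A closed curve around the conductor cannot be contracted without leaving the field region, which expresses the multiple connectedness of that region.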
[1] For instance Zermelo in 1908, quoted by Quine 1963, 4: ‘Set theory is that branch of mathematics whose task is to investigate mathematically the fundamental notions of ‘number’, ‘order’, and ‘function’ taking them in their pristine, simple form, and to develop thereby the logical foundations of all of arithmetic and analysis.’ See Putnam 1975, chapter 2.
[2] Shapiro 1997, 98: ‘Mathematics is the deductive study of structures’.
[3] Beth 1944a, 115.
[4] This reference system cannot be finite because of its abstract and universal character.
[5] Dooyeweerd NC, II 79ff; Cassirer 1910, 47-54.
[6] Cf. Beth 1944a, 61, 67, 68.
[7] See, for instance, Beth 1944a and Russell 1919.
[8] Beth 1944a, 72.
[9] Russell 1919, 29.
[10] Dooyeweerd 1940, 167, 168; NC II, 79.
[11] Russell 1919, 15; Carnap 1939, 38ff.
[12] Peano took 1 to be the first natural number. Nowadays one usually starts with 0, to indicate the number of elements in the zero set. Starting from its element 0, the set of integral numbers can also be defined by stating that each element a has a unique successor a+ as well as a unique predecessor a-, if (a+)- = a, see Quine 1963, 101.
[13] In the decimal system 0+=1, 1+=2, 2+=3, etc., in the binary system 0+=1, 1+=10, 10+=11, 11+= 100, etc. From axiom 2 it follows that N has no last number.
[14] The fifth axiom states that the set of natural numbers is unique. The sequence of even numbers satisfies the first four axioms but not the fifth one. On the axioms rests the method of proof by complete induction: if P(n) is a proposition defined for each natural number n ≥ a, and P(a) is true, and P(n+) is true whenever P(n) is true, then P(n) is true for any n ≥ a.
[17] Quine 1963, 107-116.
[18] In 1931, Gödel (see Gödel 1962) proved that any system of axioms for the natural numbers allows of unprovable statements. This means that Peano’s axiom system is not logically complete.
[19] Putnam 1975, xi: ‘… the differences between mathematics and empirical science have been vastly exaggerated.’ Barrow 1992, 137: ‘Even arithmetic contains randomness. Some of its truths can only be ascertained by experimental investigation. Seen in this light it begins to resemble an experimental science.’ See Shapiro 1997, 109-112; Brown 1999, 182-191.
[20] Goldbach’s conjecture, saying that each even number can be written as the sum of two primes in at least one way, dates from 1742, but is at the end of the 20th century neither proved nor disproved.
[21] From the set of natural numbers 1 to n, starting from 3 the sieve eliminates all even numbers, all triples, all quintets except 5 (the quartets and sextuplets have already been eliminated), all numbers divisible by 7 except 7 itself, etc., until one reaches the first number larger than √n. Then all primes smaller than n remain on the sieve. For very large prime numbers this method consumes so much time that the resolution of a very large number into its factors is used as a key in cryptography. There are many more sequences of natural numbers subject to a characteristic law or prescription. An example is the sequence of Fibonacci (Leonardo of Pisa, circa 1200). Starting from the numbers 1 and 2, each member is the sum of the two preceding ones: 1, 2, 3, 5, 8, 13, … This sequence plays a part in the description of several natural processes and structures, see Amundson 1994, 102-106.
[22] Quine 1963, 30-32 assumes there is no objection to consider an individual to be a class with only one element, but I think that such an equivocation is liable to lead to misunderstandings.
[23] A well-known paradox arises if a set itself satisfies its prescription, being an instance of self-reference. The standard example is the set of all sets that do not contain themselves as an element. According to Brown 1999, 19, 22-23 restricting the prescription to the elements of the set may preclude such a paradox. This means that a set cannot be a member of itself, not even if the elements are sets themselves.
[25] This is a consequence of the axiom stating that two sets are identical if they have the same elements.
[27] Cassirer 1910, 49.
[28] In mathematics, the theory of groups became an important part of Felix Klein’s Erlanger Programm (1872) on the foundations of geometry. In physics, groups were first applied in relativity theory, and since 1925 in quantum physics and solid state physics. Not to everyone’s delight, however, see e.g. Slater 1975, 60-62, about the ‘Gruppenpest’: ‘… it was obvious that a great many other physicists were as disgusted as I had been with the group-theoretical approach to the problem.’
[29] In general, AB ≠ BA. If AB = BA, the group is called commutative or Abelian (after N.H. Abel).
[30] Cassirer 1910, 55, 56.
[31] Poincaré 1906, 8.
[32] For the introduction of the set of natural numbers or the group of integers, we only need to specify one member, the number 1. All other integers are generated according to the group operation of addition. For the introduction of the multiplication group of positive rational numbers, we have to rely on the set of prime numbers, and hence on the full set of natural numbers (which can only be defined with the addition as a group operation), because of the theorem that the number of prime numbers is infinite.
[33] If a, b, c, and d are integers, the group-theoretical approach demands that a/1 = a, etc. Hence, the addition of the rational numbers must be defined as a/b+c/d=(ad+bc)/bd, in order to arrive at the result that a/1+b/1=a+b.
[35] Courant 1934, 59, 60. Although there exists a one-to-one correspondence between the integers and the rational numbers, their groups are not isomorphic.
[37] Philosophers do not generally recognize the importance of dense sets for the transition from rational to real numbers.
[38] Grünbaum 1968, 13.
[40] By multiplying a single irrational number like π with all rational numbers, one already finds an infinite, even dense, yet denumerable subset of the set of real numbers. Also the introduction of real numbers by means of ‘Cauchy sequences’ only results in a denumerable subset of the real numbers.
[41] This procedure differs from the standard treatment of real numbers, see e.g. Quine 1963, chapter VI.
[43] It is not difficult to prove that the points on two different line segments correspond one-to-one to each other.
[44] Courant 1934, 39, 40, 60.
[45] Up till the end of the 19th century, the distinction between denseness and continuity was not clearly recognized, see Grünbaum 1968, 13. In the past, continuity was sometimes defined as ‘infinite divisibility’, but this leads only to denseness.
[46] Boyer 1939, 284-290. To avoid this pitfall the modern approaches of Weierstrass, Cantor, Dedekind, and Russell have been institutionalized.
[47] Dooyeweerd NC, II 79, 88, 170ff, 383; see also Strauss 1970-1971. In fact, this is not quite a new view: for some time, the negative numbers were called ‘numeri absurdi’, ‘aestimationes falsae’ or ‘fictae’, the irrational numbers ‘numeri surdi’, and the complex numbers are still called ‘imaginary’; cf. Beth 1944b, 72, 73.
[48] Beth 1948, 34ff; Russell 1919, chapter 7.
[49] Beth 1944a, 155.
[50] Cf. Beth 1944b, 50ff.
[51] Beth 1950, 77ff; 1944, 23ff.
[52] (a) The sum of two vectors is a vector defined as a+b = (a1+b1,a2+b2,…,an+bn).
(b) The product of a vector with a real number c is a vector defined as ca = (ca1,ca2,…can).
(c) Introducing the n unit vectors (1,0,0,…0), (0,1,0,…0), … (0,0,0,…1), any vector can be written as a = a1(1,0,0, … 0) + a2(0,1,0,…0) + … + an(0,0,0,…1).
[53] Because of its relevance to physics it may be recalled that the complex numbers can also be represented in other ways by a pair of real numbers. The most important is the representation in terms of sine and cosine functions or, equivalently, as an exponential function. If a = p·cos x and b = p·sin x, then a + bi = (a, b) = p·cos x + ip·sin x = p·e^ix.
The norm of this complex number is |p|, and x is called its phase. For any integer n, p·e^i(x+2πn) = p·e^ix. This representation is especially convenient with respect to multiplication: (p·e^ix)(q·e^iy) = pq·e^i(x+y).
[54] Beth 1944b, 42.
[55] Beth 1944b, 67.
[56] The quantum mechanical state space is called after David Hilbert, but invented by John von Neumann, in 1927.
[57] (1) If a and b are arbitrary complex numbers, and f1, f2, and f3 are arbitrary members of the set, then g=af1+bf2 is also a member of the set, which is therefore a group under addition.
(2) There exists a functional (f1,f2) called the scalar product, which is a finite complex number, such that:
(a) (f1,f2)=(f2,f1)*
(b) (af1,bf2)=a*b(f1,f2)
(c) (f1+f2,f3)=(f1,f3)+(f2,f3)
(d) (f1,f2+f3)=(f1,f2)+(f1,f3): the scalar product is a linear functional.
The norm ||f|| of the function f is a real non-negative number defined by ||f||² = (f,f).
If (f1,f2)=0 we call f1 and f2 orthogonal, which implies that they are mutually independent. There exists a maximum number m of mutually independent and normalized functions n1,n2,…,nm, such that (ni,ni)=1 for i=1,2,3,…,m, and that (ni ,nj)=0 if i≠j for i,j=1,2,3,…,m. This implies that any function f in the set can be written as f=a1n1+a2n2+…+amnm, where a1,a2,…am are complex numbers, ai = (f,ni).
With respect to the basis (the set n1,n2,…,nm) f can be written as the vector f=(a1,a2,a3,…am). The basis is not unique. In fact, the number of possible bases for a Hilbert space is infinite.
[58] Jauch 1968, 24.
[59] Since the beginning of the 19th century, projective geometry is developed as a generalization of Euclidean geometry.
[60] Shapiro 1997, 158; Torretti 1999, 408-410.
[61] e.g. Bourbaki, pseudonym for a group of French mathematicians. See Barrow 1992, 129-134; Shapiro 1997, chapter 5; Torretti 1999, 412.
[62] A graph is a two- or more-dimensional discrete set of points connected by line stretches.
[63] This is not the case with all applications of numbers. Numbers of houses project a spatial order on a numerical one, but hardly allow of calculations. Lacking a metric, neither Mohs’ scale of hardness nor Richter’s scale for earthquakes leads to calculations.
[64] Allen 1995.
[65] Galileo 1632, 20-22.
[67] In a Euclidean space, the scalar product of two vectors a and b equals a·b = ab·cos α, α being the angle between the two vectors.
[68] Van Fraassen 1989, 262.
[69] If the coordinates of two points are given by (x1,y1,z1) and (x2,y2,z2), and if we call Δx = x2 – x1, etc., then the distance Δr is the square root of Δr² = Δx² + Δy² + Δz². This is the Euclidean metric.
[70] Non-Euclidean geometries were discovered independently by Lobachevski (first publication 1829-30), Bolyai, and Gauss, and later supplemented by Klein. The significant step is to omit Euclid’s fifth postulate, corresponding to the axiom that one and only one line parallel to a given line can be drawn through a point outside that line.
[71] Riemann’s metric is dr² = gxxdx² + gyydy² + gxydxdy + gyxdydx + … Note the occurrence of mixed terms besides the quadratic terms. In the Euclidean metric gxx = gyy = 1, gxy = gyx = 0, and Δx and Δy are not necessarily infinitesimal. See Jammer 1954, 150-166; Sklar 1974, 13-54. According to Riemann, a multiply extended magnitude allows of various metric relations, meaning that the theorems of geometry cannot be reduced to quantitative ones, see Torretti 1999, 157.
[73] In the general theory of relativity, the coefficients for the four-dimensional space-time manifold form a symmetrical tensor, i.e., gij = gji for each combination of i and j. Hence, among the sixteen components of the tensor, ten are independent. An electromagnetic field is also described by a tensor having sixteen components, but its symmetry demands that gij = –gji for each combination of i and j; hence the diagonal components are zero. This leaves six independent components: three for the electric vector and three for the magnetic pseudovector. That gravity has a different symmetry than electromagnetism is related to the fact that mass is definitely positive and that gravity is an attractive force. In contrast, electric charge can be positive or negative, and the electric Coulomb force may be attractive or repulsive: a positive charge attracts a negative one; two positive charges (as well as two negative charges) repel each other.
[74] Dooyeweerd NC, II 383ff; Dooyeweerd’s statement that an object in some modal aspect cannot be a subject in the same modal aspect is obviously wrong.
[75] Cp. Suppes 1972, 310.
[76] Dooyeweerd 1940, 166; NC II, 85; Leibniz already considered space and time as orders of coexisting and successive things or phenomena. Cf. Jammer 1954, 4, 115; Whiteman 1967, 383; Čapek 1961, 15ff.
[77] R(A,A) for all A; if R(A,B), then R(B,A); if R(A,B) and R(B,C), then R(A,C).
[78] The arbitrariness of the choice of the unit, sometimes called ‘gauge invariance’, must not be confused with the so-called ‘magnitude invariance’, according to which many properties of, e.g., spatial figures depend only on their shape and not on their magnitude. The former invariance is universally valid, while the latter has a far more limited validity. In particular, it is false for typical relations, such as the size of atoms. See Čapek 1961, 21-26.
[79] Grünbaum 1968, 12, 13.
[80] See Kolakowski 1966, chapter 2 and 6. For a critique of conventionalism, see Popper 1959, 78ff, 144ff; Friedman 1972.
[81] Grünbaum 1963, 18ff; 1968 16ff.
[82] Grünbaum 1963, 98ff; Grünbaum, in Henkin et al. (eds.), 204-222.
[83] Grünbaum 1963, 16; 1968, 34.
[84] It should be noted that my critique is not quite appropriate to Grünbaum’s alternative metrization mentioned above. His semi-plane is only symmetric with respect to translations along the x-axis, and reflections with respect to the y-axis. His non-standard metric reflects these two symmetries just as well as the standard metric does. But then a semi-plane is not a very interesting example, in particular not for Grünbaum’s purposes.
[86] Grünbaum 1963, 8ff; Beth 1950, 71.
[87] Nagel 1961, 244, 246.
Chapter 3
Metric and measurement
3.1. Measurement
3.2. Comparative properties
3.3. Scales
3.4. The metric
3.5. Units
3.6. The theoretical character of the metric
3.7. Equivalence, measurement, and the spatial aspect
3.8. Extensive properties: mass
3.9. Intensive properties: temperature
3.10. The spatial and temporal metrics
3.11. The relevance of the metric for the dynamic development of physics
3.12. Absolute and relative space, time, and motion
3.1. Measurement
In our discussion of the numerical and the spatial modal aspects, both the theory of groups and the concept of isomorphy played an important part. We used the theory of groups as a mathematical theory of relations, and the concept of isomorphy as a mathematical expression of the philosophical concept of the projection of relations of one kind on those of another. In the present chapter, we want to show that both are very important for the understanding of the mathematical opening up of the physical sciences. Especially since the 17th and 18th centuries, physicists have tried to find numerical and spatial objective descriptions of kinematic and physical relations. The concept of isomorphy enables us to introduce numerical and spatial representations of kinematical and physical states of affairs. The theory of groups provides us with operational definitions of the metrics of such representations, and allows us to find mathematical theories for kinematics and physics. Together they form the basis of measurement, and hence of modern empirical science.
Measurement is at the heart of the physical sciences, and therefore it seems justified to devote an entire chapter to its problems. Moreover, it gives us an opportunity of showing the power of the basic distinctions with which we started our investigations even before we apply them to physics and kinetics.
Measurement is the establishment of objective relations between subjects under a law. In science we must find out which modal subject-subject relations can be objectified, and to which law (which metric) that relation is subjected. We also have to discover which characters are most suited to provide us with standards of measurement. Only then can we discover in which way modal relations are individualized in typical structures.
Measurements are always performed with concrete existing subjects. However, we are mainly interested in universal, modal relations and, therefore, we need a theory to provide a bridge between typicality and modality.
The aim of measurement in physics is to obtain an objectification of physical states of affairs: to represent physical subject-subject relations by modal numerical or spatial relations. In the former case this means a quantification, in the latter case it can mean a representation in graphs. A direct quantification can be achieved by counting. An indirect one is obtained by measurement, the comparison of subjects which are comparable because they have an objective property in common.
The possibility of performing experiments and doing measurements is largely responsible for the growth of the physical sciences in modern times. One may wonder why this growth is not present to a greater extent in the social sciences. One reason, of course, is the difficulty of designing relevant experiments because of ethical considerations. There is a second reason, however, which is perhaps more important. It is the lack of a modal metric in the post-physical modal aspects. We shall see that the metric is the indispensable law side of measurement. If our view is correct, the problems encountered in the social sciences with respect to measurements and their interpretation[1] are largely due to an absence of this metric.
We shall not strive for completeness. Important aspects of measurement, such as the psychological (observational) and cognitive (rational) aspects, will at best be treated superficially. In this chapter we restrict ourselves to ‘classical’ measurements. Later we intend to show that the so-called measurement problem in quantum physics is not really a measurement problem, because measurements in quantum physics are performed in the same way as described in this chapter. This problem concerns, on the one hand, the application of statistical methods to a number of individual measurements, on the other hand, some ontological interpretations of interactions which occur, for instance, in measurement processes.
3.2. Comparative properties
According to Carnap and others,[2] the classical distinction of qualitative and quantitative properties is insufficient. There is a third type, called comparative or topological properties. For instance, it is quite meaningless to seek a dichotomy behind linguistic pairs, such as long-short, heavy-light, hot-cold, small-large, fast-slow, old-young, etc. In these examples we should speak of larger than, heavier than, hotter than, etc. Therefore, one speaks of a comparative attribute if it enables us to put the objects[3] to be compared into a linear order of more or less. It is a quantitative distinction if it is comparative, and has a numerical scale. We need to distinguish one more case – viz., if the property is subjected to a metrical law, or, in short, a metric. In this case it is not only possible to assign ordering numbers to the subjects which are compared, but also to assign numbers to the subjective relations between them.
These definitions are still not complete. Not only do comparative attributes have an order of before and after, but they also have an order of equivalence. Thus, if we wish to compare any pair of subjects with respect to a comparative attribute, we must either order the two subjects in the form of a more-less statement or show that they are equivalent. For instance, two physically qualified subjects either have the same weight or one is heavier than the other. All physically qualified subjects having the same weight constitute an equivalence class with respect to the property ‘heavy’. Strictly speaking, the subjects are not numerically ordered, but the equivalence classes of their objective properties are.
For a comparative concept we require therefore that there exists an ordering relation, Rx(A, B) – for pairs of subjects A and B – with three possibilities, R+, R-, and Ro subjected to the following rules:
(a) One and no more than one of the three possibilities of Rx applies to the pair (A, B): R+, R-, and Ro are alternative and mutually exclusive.
(b) R+(A, B) implies R-(B, A), and vice versa, whereas Ro(A, B) implies Ro(B, A).
As a consequence, Ro applies to the pair (A, A): any subject is equivalent to itself with respect to any attribute. Ro is a symmetrical relation; R+ and R- are asymmetrical and each other’s converse. R+ is called precedence, Ro equivalence.
(c) For three subjects A, B, and C, each with the same attribute, if Rx(A, B) and Rx(B, C), then Rx(A, C), where Rx stands either for R+ or for R- or for Ro: each relation is transitive.
The three rules (a)-(c) describe a one-dimensional quasi-serial ordering of subjects A, B, C, … with respect to the attribute Rx. This array is called ‘quasi-serial’,[4] because it is serial except for the fact that several elements may occupy the same place in it. Any attribute Rx that satisfies rules (a)-(c) will be called a magnitude. It serves to objectify the subjects to be compared.[5]
Due to precedence, a magnitude always refers back to the numerical modal aspect. But, in the first place, it refers back to the spatial modal aspect, provided the magnitude is not a spatial concept itself, such as length. The subjects which are compared are at least equivalent insofar as they share the objective property used for the comparison. Moreover, not the subjects themselves but their equivalence classes are numerically ordered. Finally one can observe that a subject’s physical properties, to the extent that they change, refer back to the kinematical aspect.
3.3. Scales
An attribute Rx subjected to rules (a)-(c) can be objectified by assigning numbers to the equivalence classes. If we have practical means of establishing equivalence and the order of the equivalence classes, we call the magnitude measurable, and we can devise a scale, which is thus a numerical objectification of the property concerned. Without any order or equivalence, we can still use numbers to indicate subjects, for instance for the identification of football players.[6]
At the present stage, the only restriction applied to such a scale is that the order of the assigned numbers must reflect the serial order of the equivalence classes. Such a scale is by no means unique. A scale (x) can be replaced by any other scale (x’) if x’ is a monotonic function of x. We may even replace an increasing scale by a decreasing one. A special scale transformation is a linear one: x’ = ax + b (a and b are real numbers, a ≠ 0).
If a > 0, we speak of a positive linear transformation. If b = 0 and a > 0, we speak of a dilatation. If a = 1 and b ≠ 0, we speak of a shift. A scale is an interval, ratio, or difference scale if it is unique up to positive linear transformations, dilatations, or shifts, respectively.[7]
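A familiar instance of a positive linear transformation is the conversion from the Celsius to the Fahrenheit scale,

\[ x' = 1.8\,x + 32 \qquad (a = 1.8 > 0,\; b = 32), \]

which is why everyday temperature constitutes an interval scale: differences, and ratios of differences, are preserved, whereas ratios of the temperature values themselves are not.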
Consider, for example, Mohs’ scale of hardness, which ranges from 1 (talc) to 10 (diamond) and is defined by reference to the scratch test. A mineral A is called harder than a mineral B if a sharp point of A scratches a smooth surface of B. A and B are called equally hard if neither scratches the other.[8] Such a scale is not isomorphic but homomorphic. It is merely ordinal, because the assignment of numbers to the equivalence classes is completely arbitrary, except for their order. In this respect it does not differ from, for example, an alphabetical ordering. In particular, the difference in hardness between two minerals designated as 9 and 10 is not related to the difference in hardness numbered 4 and 5. Neither does it make sense to say that diamond is twice as hard as a mineral with hardness 5 (apatite).
In contrast to this comparative attribute, consider the metrical attribute ‘volume’. If we compare two vessels of 990 and 1000 litre with two vessels of 100 and 110 litre, we can meaningfully say that the volume differences are the same in the two cases. The differences are equivalent to the same amount. It is also meaningful to state that a container of 1000 litres is twice as large as a 500 litre container. Indeed, the scale for volumes is not merely ordinal, but is metrical.[9] The main distinction is that in ordinal scales we can assign numbers to the equivalence classes of the ordered subjects themselves, whereas in a metrical scale it is rather the subject-subject relation which is quantified.
3.4. The metric
Most physical scales are subjected to a fourth rule, which is not generally recognized.[10] It extends the meaning of the term Rx(A, B) from a comparative ordering relation to a quantitatively objectifiable subject-subject relation.
All subjects A’, A’’, … for which R0(A, A’), R0(A, A”), …, form an equivalence class with the subject A. Hence, we have equivalence classes R0(A), R0(B), … If R+(A, B) applies, the equivalence class R0(A) precedes the equivalence class R0(B). These equivalence classes form a serial (not a quasi-serial) order. Let us now assume that there exists some non-numerical relation between the equivalence classes R0(A), R0(B), … to be called R(A, B). In some (non-numerical) sense, R(A, B) may be equivalent with R(C, D), and hence there may exist an equivalence class R(A, B). Now we are able to formulate the following rule:
(d) The equivalence classes of the relations R(A, B), for all possible pairs of subjects A and B with respect to some attribute, are elements of a group, isomorphic to a specified group of real numbers.
This isomorphism is called the metric, and a magnitude satisfying rule (d) is called metrical.[11]
The ‘specification’ includes both the interval of allowed numbers and the group operation connecting them. In many cases the interval is just the set of all real numbers, and the group operation is addition. Then we have a difference scale, such as that for volume and mass differences. In other cases the interval consists of the positive real numbers, and the group operation is multiplication. Then we have a ratio scale, such as that for volume ratios or mass ratios. According to the special theory of relativity, the group of all possible relative velocities in one dimension for real moving subjects has an upper bound c, the speed of light. The group composition of two relative velocities v and w is given by (v + w)/(1 + vw/c²), which follows from the properties of the so-called Lorentz group (chapter 4).
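As a check on this group structure, note that the upper bound c behaves as a fixed point of the composition: combining any velocity v with c returns c,

\[ \frac{v + c}{1 + vc/c^2} = \frac{c\,(v + c)}{c + v} = c, \]

so no composition of allowed velocities ever exceeds the speed of light, as the Lorentz group requires.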
The equivalence classes R0(A), R0(B), … do not form a group with respect to a certain attribute, but the equivalence classes of their relations, R(A, B), do. For instance, if the group operation is isomorphic to addition, negative values must be included, which is not admissible for volumes, whereas it is for volume differences. If the group operation is isomorphic to multiplication one has to take volume ratios as the elements of the group, because the product of two volumes is not a volume, whereas the product of two volume ratios is again a volume ratio.
However, the equivalence classes of the subjects themselves can be considered as relations to the identity element of the group, and can thus be interpreted as a subset of the group. Thus the volume of a subject can be considered as the volume difference with a (fictitious) subject with zero volume, and the set of all volumes is isomorphic to the set of all volume ratios with a subject with unit volume.
Among magnitudes we discern measurable properties of subjects and measurable relations between subjects, but the two are closely related. Properties also have a relational character, whereas relations have a property character – compare distance (a relation) with length (a property).
3.5. Units
For any metrical attribute, there are three ‘coordinative principles’:[12] the existence of practical means of establishing equivalence and the serial order of the equivalence classes, the metric based on a group structure, and the arbitrary choice of a unit. The isomorphism between the groups of equivalence classes of non-numerical objective relations R(A, B) and the corresponding group of real numbers does not completely define the numerical values to be assigned to the relations. In section 2.8 we have explained this for spatial magnitudes, arguing from the amorphousness of space. In fact, any metrical attribute lacks an intrinsic metric. The addition group of real numbers is itself isomorphic to the addition group which is generated by multiplying all the members of the first group (x) with an arbitrary real number c ≠ 0: x’ = cx (hence, if x1 + x2 = x3, then cx1 + cx2 = cx3). For this reason we have to assign the number 1 to some arbitrarily chosen relation R(A, B). The number 0 is given to the relation R(A, A’) for any pair of equivalent subjects A and A’. With these stipulations, i.e., with the choice of a unit, the metrical scale for addition groups is completely defined.
Likewise, for groups isomorphic to the multiplication group of positive real numbers (for which, if x1·x2 = x3, then x1^c·x2^c = x3^c for all c ≠ 0), we assign the number 1 to equivalence relations, and some arbitrary number to some subject A. For example, in the thermodynamic temperature scale the temperature of the triple point of water is given the number 273.16 K, in order to have a simple relation to the customary centigrade or Celsius scale.
Often, metrical scales with a unit refer not only to a group of real numbers, but also to a number field, characterized by addition and multiplication as group operations. For example, when we speak of a length of 10 cm, we mean a length of 10 times 1 cm. For the addition of lengths we need the distributive law for number fields: 3 cm + 5 cm = 3(1 cm) + 5(1 cm) = (3 + 5)(1 cm) = 8 cm. This is the background of the statement that one can only compare (by adding or subtracting) magnitudes having the same ‘dimension’ (not, e.g., 1 cm + 3 cm³) and the same units (e.g., 1 m + 3 cm = 100 cm + 3 cm = 103 cm).
We see that there is still some arbitrariness in metrical scales, but compared to ordinal scales, the arbitrariness is greatly reduced. The use of a scale with a unit is only meaningful for metrical scales. A merely ordinal scale has no unit because, in this case, the assignment of numbers to equivalence classes is completely arbitrary except for their serial order.
3.6. The theoretical character of the metric
There are several reasons for stating that the metric as introduced above has a theoretical character. The first reason is that the group structure appears as a law. This means that it is always an (empirically based) theoretical hypothesis to state that a certain attribute has a group structure. In many cases it is a modal law – i.e., the group structure is independent of the typical structure of the subjects which are objectified in the metric. Only the unit, which is arbitrarily chosen, depends on the typical structure of some subject.
The abstract hypothetical nature of the metric also comes to the fore because the group always has an infinite number of elements, whereas the number of physically qualified concrete subjects having a certain property may be finite. Thus the metric does not refer to actual but to possible relations.
Furthermore, in actual measurements it is impossible to establish equivalence exactly. We will always have to say that two physically qualified subjects, A and B, are equivalent with respect to a certain attribute within the accuracy obtained by our measuring instruments (including our sense organs). This means that in actual measurements rules (a)-(c) can be violated. For instance, if we have a balance that can only discriminate between masses differing by more than 1 gram, and we have three bodies A, B, and C, weighing (according to a more accurate balance) 9.25, 10.0, and 10.75 gram, respectively, then according to our crude balance A has the same weight as B, and B has the same weight as C, whereas according to the same balance, C is heavier than A. This violates rule (c).[13] On the other hand, the metric describes exact relations among subjects because of its mathematical structure.
It is difficult to give a definition of the notion of accuracy. Starting with Gauss, statistical mathematicians and physicists have developed rules to assign a number to the accuracy with which equivalence can be established. For example, if we say that the length of a room is (4.21 ± 0.01) metre, we assume that the (in)accuracy of the measurement is 1 cm. The precise meaning of this statement, and how the accuracy can be estimated, are mostly technical matters, and will not be discussed here.[14] It is sufficient to note that in actual measurements we always have a finite accuracy. Theoretically we assume that either R+(A, B), or R-(A, B), or R0(A, B) applies, and that these properties are transitive. In experimental physics R0(A, B) states: within a certain accuracy, A and B are equivalent with respect to R. But this property is no longer transitive. Thus the rules (a)-(d) form a theoretical, rather than an empirical, basis for the metric, and, in fact, for the whole of physics.
In another respect the above example also shows that the metric is abstract: the metric expressly refers to a group of real numbers. But the example indicates that measurements can only yield rational numbers, i.e., decimal numbers with a finite number of decimals.[15] Again, theoretical considerations allow us to assume that, e.g., lengths must be assigned real numbers. Theoretical geometry, not experimental geometry, proves that the length of the diagonal of a unit square equals √2. Magnitudes as retrocipatory projections can only refer back to the disclosed numerical modal aspect, i.e., to real numbers or vectors with real number components (2.3).
It is sometimes suggested that we only use real numbers for magnitudes because of convenience. Only with real numbers, for example, is it possible to differentiate and integrate functions. However, different metrics are related: given metrical magnitudes of some kind for a subject, one can calculate metrical magnitudes of another kind for the same subject. Thus if we know the mass m and the velocity v of a subject, we also know its momentum mv and its kinetic energy ½mv². This statement would lose its meaning if the quantities of mass, velocity, energy, and momentum did not refer to metrical scales. It also shows that the units for energy and momentum are related to those for mass and velocity. But a superficial inspection of the formulas relating these measurable properties shows that they would also lose their meaning if it were required that they be represented by rational numbers. Only a real number spectrum can accomplish this.
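A trivial numerical instance (with arbitrarily chosen values): for a subject with m = 2 kg and v = 3 m/s,

\[ mv = 6\ \mathrm{kg\,m/s}, \qquad \tfrac{1}{2}mv^2 = 9\ \mathrm{J}, \]

where the units of momentum and energy are fixed by those of mass and velocity, in line with the relations between metrics discussed above.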
Finally, we observe that it is not always possible to use the same experimental method to determine equivalence. Extreme operationalists maintain that if we use different methods of measurement, we, in fact, measure different magnitudes.[16] Indeed, it is a matter of theory to connect the results of such measurements.
3.7. Equivalence, measurement, and the spatial aspect
The notion of equivalence does not mean that the equivalence classes with respect to every attribute can be ordered in a single linear order. A typical counter-example is the essentially two-dimensional ordering of the equivalence classes of different colours perceived by the human eye.
Also the relative spatial positions of subjects, and forces, can only be measured if they are first decomposed into their spatial components. In these cases, we have multi-dimensional groups, and we must have recourse to a multi-dimensional metric. In a few cases the metric is complex-numbered, such as the impedance in alternating current theory.
In an analogical way the thermodynamic state of a physical system is determined by a set of extensive parameters (5.2). Thus it may occur that two systems are partly equivalent (e.g., having the same volume but different energy) or completely equivalent in a physical sense (while still having different positions or velocities). All this is possible only because the concept of equivalence itself refers to the spatial order of simultaneity. The numerical order of more or less does not contain equivalence.
All measurements are based on the establishment of equivalence. This means that among measurements two types come to the forefront. The first is based on a direct comparison of spatial position (coincidence); every measuring instrument with a visible scale ultimately depends on this type (3.10).
The other type depends on a physical analogy with the spatial modal aspect. We shall see that force is a retrocipatory projection of physical interaction on spatial relations (5.5). This type of measurement has two sub-types: measurement based on a balancing of forces (3.8), and measurement based on a thermodynamic equilibrium between a physical subject and a measuring instrument, such as a thermometer (3.9). In both sub-types the establishment of equivalence is based on a physical equilibrium state.
Note that velocities and currents can only be measured by their static effects. For instance, an electric current can be measured because it gives rise to a magnetic force.
According to relativity theory, in the opening process spatial simultaneity is relativized (4.5). This means that if we want to measure attributes of a subject that moves relative to the measuring instrument, we have to take into account this relativizing of simultaneity.
3.8. Extensive properties: mass
In this section we consider those relational attributes whose interval of allowed numerical values is the set of all real numbers, with addition as the group operation. The number zero corresponds with the equivalence relation R(A, A’) for any pair of equivalent subjects A, A’. It is now possible to assign real numbers to the subjects themselves. If we call r(A, B) the number corresponding to the relation R(A, B), and n(A) the number corresponding to the subject A, both with respect to some additive attribute R, then
r(A, B) = n(A) – n(B)
The set of all possible values n(A) is not necessarily a group, but it is (or is isomorphic to) a sub-set of the group of all possible values r(A, B).
It will be clear that even if (by the choice of a unit) the value r(A, B) is uniquely established, there is some arbitrariness in the value n(A). We can add to it an arbitrary real number (which must be the same for all subjects A). This means, we are free to choose a zero point for the n-scale without any consequence for the r-scale. In some cases (e.g., length) the zero of the n-scale is obvious. In other cases (e.g., spatial position) the zero of n is completely arbitrary.
We shall now discuss the construction of the metric of an additive or extensive property. We call a property extensive, if it satisfies rules (a)-(d) mentioned above, and if
n(AoB) = n(A) + n(B)
Here the symbol ‘AoB’ means: ‘the physical sum of the subjects A and B’ – i.e., A and B combined in a physical sense, relevant for the attribute concerned. (Instead of ‘physical’ one can also read ‘spatial’ or ‘kinetic’.) The validity of rule (d) implies that this combination procedure, which is isomorphic to the addition of real numbers, leads to a group of relations between the subjects A, B, …, isomorphic to the group of real numbers. This combination procedure must be specified in every case. For example, consider the combination of two electrical resistors, as illustrated below. If they are connected in series, we add their resistances, but if they are connected in parallel, we have to add their conductances.[17] (Conductance is the inverse of resistance.) In all cases the addition rule only applies if A and B are disjoint.
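A worked instance, with arbitrarily chosen values: for two resistors of 6 Ω and 3 Ω,

\[ \text{series: } R = 6 + 3 = 9\ \Omega; \qquad \text{parallel: } \frac{1}{R} = \frac{1}{6} + \frac{1}{3} = \frac{1}{2}\ \Omega^{-1}, \text{ so } R = 2\ \Omega. \]

Which magnitude behaves extensively (resistance or conductance) thus depends on the specified combination procedure.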
Let us suppose that we have a means of determining (within a certain accuracy) whether two subjects belong to the same equivalence class with respect to some extensive property. Then we are able to determine uniquely the number r(A, B) for any two subjects A and B, as is seen in the following example. Suppose we want to compare the masses m(A) and m(B) of two physical subjects. Our measuring instrument is a balance, which allows us to see whether two subjects have the same mass, and if not, which one is heavier.[18] Now we take p bodies with the same mass m(A) and q bodies with the mass m(B), such that the first collection of p bodies balances the second set of q bodies:
|p.m(A) – q.m(B)| < ε
where ε indicates the accuracy of the balance. Accordingly,
|m(A) – (q/p).m(B)| < ε/p
So we find that the mass of A is q/p times the mass of B, within the accuracy ε/p. If m(B) happens to be equal to the unit of mass (1 kg), then the mass of A is q/p kg.[19] We observe that this measurement yields a rational number, because actual measurements always have a limited accuracy.
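For instance (with hypothetical numbers): if p = 3 copies of A balance q = 4 copies of the standard kilogram B on a balance of accuracy ε = 3 g, then

\[ \left| m(A) - \tfrac{4}{3}\ \mathrm{kg} \right| < \frac{\varepsilon}{p} = 1\ \mathrm{g}, \]

i.e., m(A) = 4/3 kg within one gram – a rational number, as observed above.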
Whether a magnitude is extensive or not is not a convention. It can be falsified by experiment.[20] It is an empirical fact that mass is an additive property, at least under certain circumstances.[21] In relativity physics it is shown that mass is only additive if the added subjects have no relative kinetic energy and no relative potential energy. Therefore the mass of a deuteron is less than the sum of the masses of its constituent particles – a proton and a neutron. On the other hand, it is not always the measurement procedure that establishes whether a certain property is extensive or not. There are many extensive properties whose numerical values can only be determined indirectly. Therefore, their metrics depend on other so-called fundamental metrics.[22] In thermodynamics two key attributes, internal energy and entropy, cannot be measured directly. In fact, a large part of a general course in thermodynamics is required to give proper account of the metrics of energy, entropy, and also temperature (which is not an extensive property).
3.9. Intensive properties: temperature
Sometimes, all properties which are not extensive in the sense defined above are called intensive,[23] but we shall apply a more restricted definition. We shall call an attribute intensive if it satisfies the rules (a)-(d), and if either n(AoB) = n(A) = n(B) or n(AoB) is not defined.
Thus we have a meaningful interpretation for n(AoB) only if n(B) equals n(A). If this is the case, we say that A and B are in equilibrium with respect to the property designated by n.
A typical example is temperature. If we bring two physically qualified subjects into thermal contact, then they will eventually have the same temperature. As long as A and B have different temperatures, it makes no sense to speak of the temperature of their sum. The statement: ‘If a subject A is in thermal equilibrium with a subject B, and if A is in equilibrium with a third subject C, then B and C are in equilibrium with each other’ is a part of rule (c), and is not only relevant to temperature, but to any equilibrium parameter.[24]
For both intensive and extensive properties the establishment of equivalence is implied in the definition of n(AoB): we measure the temperature of a body with a calibrated thermometer as soon as we are confident that the two have the same temperature.[25] However, this method is not sufficient to determine unique relations between bodies which are not equivalent with respect to intensive parameters, as can be done with extensive parameters. Consequently, the scale for an intensive property always depends on the scales for one or more extensive parameters.
Sometimes, this dependence is easily found, as, e.g., the internal pressure of a gas. This intensive property is equal to the force per unit area exerted by the gas on the walls of its container, and force and area are both extensive parameters. Thus the calibration of a manometer is in principle a simple matter. For temperature, another well-known and important magnitude, the construction of a scale is far more complicated. We shall show this in some detail because temperature is a key concept in physics, and because the following discussion is very illuminating for our distinction of merely ordinal scales and modal metric scales.[26]
The tendency of fluids to expand on heating provides the possibility of measuring temperature by the length of, e.g., a mercury column. The mercury temperature scale is defined such that the temperature of melting ice is given the value 0 °C and that of boiling water the value 100 °C. The numerical values for other temperatures are found by linear inter- and extrapolation. In this way temperature measurement is reduced to the extensive scale for length measurement. Thus the temperature is taken to be 50 °C if the height of the mercury column is just halfway between the points for 0 °C and 100 °C.
This merely ordinal scale, though very useful, is rightly called conventional, because it depends on the typical properties of mercury. In fact, any property which depends on temperature could be used instead.[27] If we took another liquid (like alcohol), we would define the 0 and 100 points in the same way. But now a body having a temperature of 50° on the mercury scale would show a temperature of, let us say, 49° on the alcohol thermometer, provided its scale is equally divided between 0 and 100 like that of the mercury thermometer. A practical way out of this difficulty is to calibrate the alcohol thermometer against the mercury thermometer, which makes the alcohol scale non-linear; but this does not make the mercury scale less conventional, since it arbitrarily assumes that mercury expands linearly on heating, and that alcohol does not.
In modern axiomatic thermodynamics, temperature is usually introduced as the derivative of energy with respect to entropy, which are both extensive properties (5.2). Given the methods of statistical physics it is then possible to design a temperature scale which is not conventional, except for the choice of the unit. The same scale can also be found by thermodynamic means, and we shall describe this older and rather elaborate method in order to stress its modal universality.[28]
This method starts from a very general principle which can be formulated in two equivalent ways. According to William Thomson (Lord Kelvin), it is impossible that the net result of a cyclical process is that heat is completely transformed into work. According to Rudolf Clausius, it is impossible that the only result of a cyclical process is that heat is transferred to a warmer body. Both statements are expressions of the physical order of irreversibility.
The efficiency of a cyclical process is defined as the net work (output minus input) divided by the heat input (discarding the heat lost). If we consider several different cyclical processes, all working between the same temperatures T1 and T2 (T1 > T2), then it follows from the principles of Kelvin and Clausius that no cyclical process can have a higher efficiency than a so-called Carnot cycle. This consists of two isothermal processes (at constant temperature T1 and T2, respectively), interspersed with adiabatic processes (during which no heat is exchanged and the temperature changes from T1 to T2, and vice versa). A Carnot cycle is reversible: it either converts heat into work, or it transports heat from a low- to a high-temperature reservoir. If the heat input at temperature T1 is called Q1 and the heat output at temperature T2 is called Q2, then with the help of the conservation law of energy we find that the efficiency of a Carnot cycle is 1 – Q2/Q1. It may be observed that up to this point a temperature scale is not required. It suffices to have a means of establishing whether two subjects have the same temperature, and if not, which one is hotter.
It can be shown that a reversible Carnot cycle is more efficient than an irreversible cycle working between the same two temperatures, and that two Carnot cycles working between the same temperatures have the same efficiency, irrespective of the typical structure of the processes involved. Therefore, the efficiency can only be a function of these temperatures, and it is possible to define the temperature scale such that T1/T2 = Q1/Q2. This scale is arbitrarily provided with a unit by stipulating that the temperature of the triple point of water is 273.16 K (for Kelvin).[29]
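A worked example on this scale: a Carnot cycle operating between reservoirs at T1 = 400 K and T2 = 300 K has the efficiency

\[ 1 - \frac{Q_2}{Q_1} = 1 - \frac{T_2}{T_1} = 1 - \frac{300}{400} = 0.25, \]

so at most one quarter of the heat input can be converted into work, irrespective of the typical structure of the engine.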
This theoretical thermodynamic temperature scale is – except for the unit – independent of any typical property whatsoever and is therefore called ‘absolute’. It is completely of a modal character.[30] It is based only on the physical time order as expressed in the Second Law of thermodynamics, and the assumption that heat (i.e., energy flux), an extensive property, can be measured directly or indirectly, which is indeed the case. This implies that we can use this modal theoretical magnitude in theoretical formulae. Thus it is only meaningful to state that the mean kinetic energy of molecules in a gas is (3/2)kT, if T does not refer to the mercury scale, but to the thermodynamic scale.
Certainly a Carnot cycle is not a practical thermometer. It is the task of thermometry to devise practical thermometers which come as near as possible to the theoretical temperature scale. For instance, by theoretical analysis it can be shown that this scale is identical to one based on the expansion of an ideal gas, which is approximated by dilute gases like helium, argon, and hydrogen, except at very low and very high temperatures. But now the order is reversed: we do not define a scale by using a thermometer with its typical properties, but we use a certain thermometer, according to convenience, in a certain situation. Its scale should approximate as nearly as possible the modal theoretical thermodynamic scale, in order to give results which can be used to corroborate or falsify physical theories. Thus we conclude that the thermodynamic scale is not based on some convention, but on a theoretical analysis of physical relations.
3.10. The spatial and temporal metrics
We have seen that the measurement of extensive properties like mass and intensive properties like temperature depends on a state of equilibrium between two subjects. Such a state is characterized by an equilibrium between two or more (generalized) forces, and we shall argue that force is a spatial analogy of physical interaction.
At first sight this does not apply to measurements of length and time. If we want to compare the lengths of two bodies which are spatially remote, we take a metre stick, first measuring the length of one subject and then the length of the second; finally we obtain the difference or ratio of the two values. But what is our guarantee that the length of our metre stick did not change between the two measurements? Why do we take a solid body as our metre stick and not a rubber string? Is the outlined procedure still valid if the temperature in the environments of the two bodies is not the same? Why do today’s physicists take the wave length of a certain spectral line as the fundamental unit of length, and not the length of the standard metre at Sèvres?
Similar questions arise with respect to the measurement of time. By an accurate measurement of time is understood the comparison of a certain time interval with a periodic system, a clock. But how do we know that a certain clock is really accurate, such that it ticks off equal periods? Why do we assume that certain clocks are more accurate than others? Do exactly periodic systems really exist?
Usually one reasons that it is impossible to base the measurement of length on the concept of a rigid body, because this would lead us into a vicious circle: to show that a body is rigid, we need rigid bodies. Similarly, to show that a clock is periodic, we need periodic systems. I shall try to make clear that the real problem is getting into this circle, not getting out of it.
The conventionalist’s answer to these problems is more or less as follows. We take a large class of any kind of bodies and compare their lengths. Now under certain circumstances (e.g., equal temperature) a subclass of these bodies has invariant length ratios, whereas other subclasses do not. It is just a matter of convenience to take this subclass as the class of rigid bodies which is used as a basis for the measurement of length. Sometimes criteria of simplicity and fruitfulness are added to this convention. In this framework the question cannot be posed (let alone be answered) why the physically qualified bodies of this subclass are more or less equally rigid. As Adolf Grünbaum says:
‘Only the choice of a particular extrinsic congruence standard can determine a unique congruence class, the rigidity or self-congruence of that standard being decreed by convention, and similarly for the periodic devices which are held to be isochronous (uniform) clocks.’[31]
However, this convention is too good to be true. One could conventionally assume that an atomic clock designates ‘true time’, but that does not explain why all other clocks submit themselves willingly to this ‘arbitrary’ choice. The answer of today’s physicists is quite different. They carry out an analysis of all available physically qualified structures in order to find the most stable ones, which are used as standards for measurement. For the criteria of stability, the basic spatial and kinematic laws are presupposed. In particular the spatial isotropy and homogeneity, and the uniformity of kinematical time are presupposed in the physicist’s choice of the standard of length and time.[32] This is what we mean by saying that we have to get into the circle. We assume – supported by empirical evidence – that space and time are isotropic, homogeneous, and uniform, we choose a metric that reflects these symmetry properties, we investigate typical structures of individuality to find the most stable ones, we choose a reliable standard according to our requirements, and then we check whether space and time are isotropic, homogeneous and uniform. This is a circle, but not a logical one. It is by no means logically certain that such a procedure will lead to consistent results.
It has been discovered (empirically) that the standard metre at Sèvres, and the diurnal or annual motion of the earth do not give sufficient accuracy if subjected to the criteria of temporal uniformity and spatial homogeneity and isotropy. Therefore, today, the physical units of length and time are based on atomic structures: the typical wave length of a certain spectral line, and the period of another one. These spectral lines are due to electronic transitions within atoms. Therefore, an absolutely stable system (if it existed) could not be used, because no transition would occur in it. But as we shall see, the stability of a physically qualified system like an atom or a solid is determined by a typical balance of kinetic, potential, and exchange energy, the typicality of which is determined by the potential energy – i.e., by the acting forces. Thus spatial and temporal measurements also rely on a balance of forces, which leads to a typical stable equilibrium state.
All this is not essentially changed in general relativity theory if due account is given to the fact that Euclidean straight lines cannot be determined experimentally, and, therefore, must be replaced by geodesics.[33] If we find that metre sticks do not conform to Euclidean geometry, we can account for this either in a spatial way (assuming non-Euclidean geometry) or in a physical way (assuming a universal modal field of force, like gravitation, acting in the same way on rigid bodies and on periodic systems[34]). This again shows the spatial foundation and the physical qualification of measurement. Thus we find that the self-congruence of the standards of measurement is decreed by consistency of modal and typical laws, not by convention. If such a presumed consistency between hypothesized modal and typical laws cannot stand up to experimental tests, we have to modify our hypotheses. This is the basis of general relativity theory.[35]
Temporal intervals cannot be measured independently of presupposed spatial laws, and spatial relative positions cannot be determined without dependence on temporal laws.[36] It is impossible to define time and space independently by means of their measurement procedures because in actual physically qualified structures (such as measuring instruments or standards) all physical and prephysical modal aspects are involved.
The metrics for spatial and temporal relations are determined by two modal laws: (1) The uniformity of kinetic time, according to which all subjects move uniformly with respect to each other, insofar as it is possible to abstract from their mutual physical interactions. (2) The transformation laws of spatial and temporal scales, which reflect the fact that there is no preferred reference system (spatial and temporal relations, not spatial positions and temporal moments are relevant). Both traits are found in the classical Newtonian metric and in the metric of special or general relativity, the main difference being that in the latter the spatial and temporal metrics are interrelated, whereas in the former the two are supposed to be mutually independent.
Conventionalists claim that the Newtonian metric is just as conventional as spatial or temporal scales which are rigidly connected to the typical properties of some individual system. Thus, Grünbaum[37] compares the Newtonian metric for time measurement with the scale based on the diurnal rotation of the earth. Compared with the Newtonian metric this rotation is slightly irregular, and slowing down, because of tidal friction. After an extensive discussion, Grünbaum concludes that
‘… apart from pragmatic considerations, the diurnal description enjoys explanatory parity with the Newtonian one’.[38]
These ‘pragmatic considerations’ include the fact that in the latter metric the physical and kinematic laws can be more conveniently expressed in mathematical terms. Citing Feigl and Maxwell, he says:
‘… one of the important criteria of descriptive simplicity which greatly restrict the range of ‘reasonable’ conventions is seen to be the scope which a convention will allow for mathematically tractable laws.’[39]
In this discussion, Grünbaum seems to overlook the group structure of the Newtonian metric, which implies, e.g., that 10 seconds now is as long as 10 seconds tomorrow, in the following sense. Suppose we wish to repeat an experiment in which it is crucial that its duration is 10 seconds. Then we will find (other things being equal) the same result today and tomorrow, or at any other time. This would not be the case if we measured time on a diurnal scale (at least if our accuracy is high enough to detect the difference between this scale and the Newtonian metric). The result of nearly every physical experiment would depend on the moment it is done.[40] A conventionalist also rejects the use of such a ‘particular’ scale because it is more convenient to refer to the larger system of the ‘rest of the universe’.[41] But then the definition of the scale is (apart from its epistemological aspects) still a purely subjective matter. For us, the choice of the metric depends on the modal law-subject relation. There happens to be a metric which is universal, not because it is applicable ‘everywhere in the universe’, but because it appears as a natural law.[42]
It is not at all interesting to find scales which depend on the typical individuality of some physically qualified subject. Far more interesting is the possibility of finding a modal metric – i.e., a scale that does not depend on the typical structure of some individual system, and which has a group structure. Only then can an objective representation of physical states of affairs be warranted. To declare that all possible non-metrical scales are on a par with modal metrical scales, and that the use of the latter is just a matter of convenience, is a gross depreciation of some of the greatest discoveries in the history of science: the isotropy and homogeneity of space, and the uniformity of time.
The conventionalist’s claim is based on the true but irrelevant statement that there are no logical grounds for accepting one scale above another one. Reichenbach[43] says:
‘It is a matter of fact that our world admits of a simple definition of congruence, because of the factual relations holding for the behaviour of rigid bodies; but this fact does not deprive the simple definition of its definitional character.’
However, these ‘factual relations’ are subjected to typical laws, which can be analysed with the help of modal laws, and the ‘conventional definitions’ are based on these laws. It is relevant that there are physical grounds for preferring metrical scales to merely ordinal ones.
3.11. The relevance of the metric for the dynamic development of physics
Physical measurements are either based on direct comparison with the help of physical scales or on the establishment of a state of equilibrium between two or more forces. Thus we use the force exerted by the earth on any physically qualified subject to measure its mass. A typical instrument is a balance originally designed to compare the masses (or weights) of different bodies. But we can also use this balance to measure other forces. We can compare the force of a spring or an electromagnetic force with gravitational forces. This possibility suggests that ‘force’ is a modal, non-typical concept, such that actual forces, though being of a different typical nature, can balance each other, and thus be measured.
It is one of the aims of science to analyse typical structures in terms of their modal aspects. Therefore, it must be possible to reduce measurements of typical properties of (and typical relations between) concrete things and events to modal relations. Indeed, it is possible to relate measurements of different kinds to each other (we gave several examples above). If this were not the case, we would need a separate scale (including a separate unit) for every kind of measurable property. Scales can only be interrelated if they are subjected to a known metric. In physics theoretical analysis (confirmed by experiment) has shown that the number of so-called fundamental units or irreducible metrics is limited to three modal ones (e.g., length, time, and mass or energy) and a small number of units referring to the so-called fundamental interactions. The best known of the latter is the unit of electric charge (or, alternatively, electric current).
We have already discussed that the choice of the standards for measurement is based on a modal analysis of typical structures. But the introduction of metrical scales for all relevant properties is a necessary prerequisite for a modal analysis of typical structures of individuality. In this analysis the typical relations are analysed into modal subject-subject relations, which can be objectified numerically because they are subjected to a modal metric. A modal metric is also indispensable for theoretical synthesis – i.e., the reconstruction of typical structures of individuality from known modal and general typical laws. In this respect, the modal metric for physical subject-subject relations is just as relevant for quantum physics as it is for classical physics. The success of physics as a so-called exact science testifies to the importance of metrics with respect to measurements on individual systems.
Without the help of a metric it is also possible to objectify magnitudes. But although numerical values achieved in this way may be convenient in comparing and communicating measurement results, they are useless in a theoretical modal analysis because they can merely describe the order of the subjects, not their modal or typical subjective relations with respect to some attribute.[44]
3.12. Absolute and relative space,
time, and motion
It is often said that the shift from the geocentric to the heliocentric world view implies that mankind no longer held the centre of the universe, and had to be content with a more modest position.[45] This is typical hindsight. It was by no means the view of Copernicus, Kepler, and Galileo, who knew the background of ancient and medieval cosmology better than our present-day world viewers. In this cosmology, the central position of the earth was by no means considered important. The earth, including its inhabitants, was considered imperfect, occupying a very low position in the cosmological hierarchy. With the advent of the heliocentric world view, man was not bereft of his central place, but was ‘placed into the heavens’, the earth becoming a planet at the same level as the perfect celestial bodies.[46]
‘As for the earth, we seek rather to ennoble and perfect it when we strive to make it like the celestial bodies, and, as it were, place it in the heaven.’[47]
The Copernican view of the cosmos was of great influence on the concepts of space and time.[48] In Aristotelian physics space is finite, bounded by the starry sphere, but time is infinite. Aristotle recognized neither beginning nor end of the cosmos and this embarrassed his medieval disciples. The Christian world view requires a beginning, the Creation, as well as an end, the return of Jesus Christ to the earth.
In the Aristotelian view of the cosmos determined by the form-matter motif, the earth stood still at the centre of the universe, which as a whole was not very much larger than the earth. Of course, Aristotle knew that the dimensions of the earth are much smaller than those of the sphere of the stars, but the latter was considered to be small enough to take the argument of stellar parallax seriously. When Copernicus introduced the annual motion of the earth, he had to enlarge the minimum dimension of the starry sphere such that the distance between the earth and the sun becomes negligible compared to it. This turned out to be a step towards the idea of an infinite universe. Copernicus, Kepler, and Galileo, however, still considered the cosmos to be spatially finite.[49] Descartes, on the other hand, identified physical space with mathematical, Euclidean space, and therefore took it to be infinite.
In Aristotelian physics the place of an object is its immediate environment.[50] The natural place of the element earth is water, surrounding the earth. The natural place of the sphere of fire is above the sphere of air and below the lunar sphere. The place of Saturn is the sphere to which it is attached, above Jupiter’s sphere and below the starry sphere. Descartes agreed that the place of a body is its environment. On the other hand, he realized that the position of a body can be determined with respect to a coordinate system, and is not in need of material surroundings.[51] He vacillated between the views that motion is relative and that it is absolute.
Inspired by his view on inertia, Newton devoted one quarter of his summary of mechanics to a scholium on space, time, and motion.[52] He did not intend to give definitions of these concepts, ‘as being known to all’.[53] His first aim was to make a distinction between absolute and relative time. In this context the term relative appears to differ from the now usual one, implying that the unit and the zero point of time are arbitrary. By relative time Newton meant time as actually measured by some clock.
Some clocks may be more accurate than others, but in principle no measuring instrument is absolutely accurate. By absolute time Newton meant a universal standard or metric of time, independent of measuring instruments. No one before Newton posed the problem of distinguishing the standard of time from the way it is measured.[54] It could only be raised in the context of experimental philosophy. After Newton, the establishment of a reliable metric for any measurable quantity became standard practice in the physical sciences.[55]
Aristotle defined time as the measure of change, but his physics was never developed into a quantitative theory of change, and this conceptual definition did not become operational. Galileo discovered the isochrony of the pendulum. Its period of oscillation depends only on the length of the pendulum, and is independent of the amplitude (as long as it is small compared to the pendulum’s length) and of the mass of the bob. Experimentally, this can be checked by comparing several pendulums, oscillating simultaneously. Pendulums provided the means to synchronize clocks.
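As an aside (not in the original text), the isochrony can be illustrated with the standard small-amplitude formula for the period of a simple pendulum, T = 2π√(L/g), in which neither the mass of the bob nor the amplitude appears. A minimal Python sketch with illustrative values:

    import math

    def pendulum_period(length, g=9.81):
        """Small-amplitude period of a simple pendulum, in seconds."""
        return 2.0 * math.pi * math.sqrt(length / g)

    print(pendulum_period(1.0))   # about 2.0 s, whatever the mass of the bob
    print(pendulum_period(0.25))  # about 1.0 s: a quarter of the length halves the period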
In 1659 Huygens derived the pendulum law making use of the principle of inertia, but apparently he did not see the inherent problem of time. Like Aristotle and Galileo, he just assumed the daily motion of the fixed stars (or the diurnal motion of the earth) to be uniform, and thus a natural measure of time. But Newton’s theory of universal gravitation applied to the solar system showed that the diurnal motion of the earth may very well be irregular. It is a relative measure of time in Newton’s sense.
The problem of absolute time, space, and motion is most pregnantly expressed in Newton’s first law, the law of inertia: ‘Every body perseveres in its state of rest, or of uniform motion in a right line, unless it is compelled to change that state by forces impressed thereon.’[56]
Uniform motion means that equal distances are traversed in equal times. This means that the absolute standard of time is operationally defined by the law of inertia itself. The accuracy of any actual clock should be judged by the way it confirms this law. The law of inertia is a genuine axiom, because there is no experimental way to test it.
However, Newton did not follow this path. The only way he saw to solve the problem was to postulate an absolute clock, together with an absolute space. Newton admitted that the velocity of an inertially moving body can never be determined with respect to this absolute space, but he maintained that non-uniform motion with respect to absolute space can be determined experimentally.[57] He hung a pail of water on a rope, and made it turn. Initially, the water remained at rest and its surface horizontal. Next, the water began rotating, and its surface became concave. If ultimately the rotation of the pail was arrested abruptly, the water continued its rotation, maintaining a concave surface. Newton concluded that the shape of the surface was determined by the absolute rotation of the fluid, independent of the state of motion of its immediate surroundings. Observation of the shape of the surface allowed him to determine whether the fluid was rotating or not. In a similar way, Léon Foucault’s pendulum experiment (1851) demonstrated the earth’s rotation without reference to some extraterrestrial reference system, such as the fixed stars. Both Newton and Foucault supplied physical arguments to sustain their views on space as independent of matter. Descartes’ mechanical philosophy identified matter with space. In his mechanics and theory of gravity, Newton had to distinguish matter from space and time. In the eighteenth and nineteenth centuries Newton’s views on space and time became standard.[58]
Gottfried Leibniz and Samuel Clarke (acting on behalf of Newton) discussed these views in 1715-1716, each writing five letters.[59] Leibniz held that space, as the order of simultaneity or co-existence, and time, as the order of succession, serve only to determine relations between material particles. Denouncing absolute space and time, he said that only relative space and time are relevant. But it is clear that relative now means something different from Newton’s intention. Apparently Leibniz did not understand the relevance of the principle of inertia for the problem of the metrics of space and time.[60]
The debate focussed on theological questions. For Newton and virtually all his predecessors and contemporaries, considerations of space and time were related to God’s eternity and omnipresence.[61] This changed significantly after Newton’s death, when scientists distanced themselves from theology.[62] This does not mean that later physicists were not faithful Christians. For instance, Michael Faraday was a pious and active member of the strongly religious Sandemanians, but he separated his faith firmly from his scientific work. Natural theology remained influential during the eighteenth and nineteenth centuries, but its focus shifted to biology and geology, and after Newton it had no significant influence on the contents of classical physics.
Leibniz’ rejection of absolute space and time was repeated by Ernst Mach in the nineteenth century, who in turn influenced Albert Einstein, although Einstein later distanced himself from Mach’s opinions. Mach denied the conclusion drawn from Newton’s pail experiment.[63] He said that the same effect should be expected if it were possible to rotate the starry universe instead of the pail with water. The rotating mass of the stars would have the effect of making the surface of the fluid concave. This means that the inertia of any body would be caused by the total mass of the universe.[64] It has not been possible to find a mathematical theory (not even the general theory of relativity) or any experiment giving the effect predicted by Mach.[65] Mach’s principle, stating that rotational motion is just as relative as linear uniform motion, is therefore unsubstantiated. Whereas inertial motion is sui generis, independent of physical causes, accelerated motion with respect to an inertial system always needs a physical explanation.
Newton treated the metric of time independently of the metric of space. Einstein showed these metrics to be related. Both Newtonian and relativistic mechanics use the law of uniform time to introduce inertial systems. An inertial system is a spatial and temporal reference system in which the law of inertia is valid. It can be used to measure accelerated motions as well. Starting with one inertial system, all others can be constructed by using either the Galileo group or the Lorentz group, both reflecting the relativity of motion and expressing the symmetry of space and uniform time.[66] Both start from the axiom that kinetic time is uniform. In the classical Galileo group, the unit of time is the same in all reference systems. In the relativistic Lorentz group, the unit of speed (the speed of light) is a universal constant. Late nineteenth-century measurements decided in favour of the latter. In special relativity, the Lorentz group of all inertial systems serves as an absolute standard for temporal and spatial measurements.
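The difference between the two groups can be shown in a minimal sketch (one spatial dimension; illustrative values; a simplification under stated assumptions, not a full treatment). A Galileo transformation changes the measured speed of a light signal; a Lorentz transformation leaves it invariant:

    import math

    C = 3.0e8  # speed of light in m/s

    def galileo(x, t, v):
        """Classical transformation to a system moving with velocity v;
        the unit of time is the same in all reference systems."""
        return x - v * t, t

    def lorentz(x, t, v):
        """Relativistic transformation to a system moving with velocity v."""
        gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
        return gamma * (x - v * t), gamma * (t - v * x / C ** 2)

    # A light signal, x = c*t, observed from a system moving at half the
    # speed of light:
    v, t = 0.5 * C, 1.0
    x = C * t
    xg, tg = galileo(x, t, v)
    xl, tl = lorentz(x, t, v)
    print(xg / tg)  # 1.5e8: the Galileo group changes the speed of light
    print(xl / tl)  # 3.0e8: the Lorentz group leaves it invariant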
Time as measured by a clock is called uniform if the clock correctly shows that a subject on which no net force is acting moves uniformly.[67] This appears to be circular reasoning. On the one hand, the uniformity of motion means equal distances in equal times. On the other hand, the equality of temporal intervals is determined by a clock subject to the norm that it represents uniform motion correctly.[68] This circularity is unavoidable, meaning that the uniformity of kinetic time is an axiom that cannot be proved, an expression of a fundamental law. Uniformity is a law for kinetic time, not an intrinsic property of time. There is nothing like a stream of time, flowing independently of the rest of reality. Time only exists in relations between events, as Leibniz maintained, although he did not understand the metrical character of time. The uniformity of kinetic time expressed by the law of inertia asserts the existence of motions being uniform with respect to each other. If applied by human beings constructing clocks, the law of inertia becomes a standard. A clock does not function properly if it represents a uniform motion as non-uniform. But that is not all.
Whereas the law of inertia allows of projecting kinetic time on a linear scale, time can also be projected on a circular scale, as displayed on a traditional clock, for instance. The possibility of establishing the equality of temporal intervals is actualized in uniform circular motion, in oscillations, waves, and other periodic processes, on an astronomical scale as in pulsars, or at a sub-atomic scale, as in nuclear magnetic resonance. Besides the kinetic aspect of uniformity, the time measured by clocks has a periodic character as well.[69] Whereas inertial motion is purely kinetic, the explanation of any periodic phenomenon requires some physical cause besides the principle of inertia. Mechanical clocks depend on the regularity of a pendulum or a balance, based on the force of gravity or of a spring. Huygens and Newton proved that a system moving with a force directed to a centre and proportional to the distance from that centre is periodic. This is the case in a pendulum or a spring. Electronic clocks apply the periodicity of oscillations in a quartz crystal.
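The Huygens-Newton result mentioned above can be checked numerically. The following crude sketch (unit mass, unit force constant, invented step size; an illustration, not a proof) integrates the motion under a force directed to a centre and proportional to the distance, and shows that it returns to its starting state after one period 2π√(m/k):

    k, dt = 1.0, 0.001     # force constant and time step (illustrative)
    x, v = 1.0, 0.0        # initial position and velocity
    for _ in range(6283):  # about one period, 2*pi seconds
        v += -k * x * dt   # acceleration from the central force F = -k*x
        x += v * dt
    print(x)  # close to the starting value 1.0: the motion is periodic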
Periodicity has always been used for the measurement of time. The days, months, and years refer to periodic motions of celestial bodies moving under the influence of gravity. The modern definition of the second depends on atomic oscillations.[70] The periodic character of clocks allows of digitalizing kinetic time, each cycle being a unit, whereas the cycles are countable. The uniformity of time as a universal law for kinetic relations and the periodicity of all kinds of periodic processes determined by physical interactions reinforce each other. Without the uniformity of inertial motion, periodicity cannot be understood, and vice versa.
At the end of the nineteenth century, Ernst Mach and Henri Poincaré suggested that the uniformity of time is merely a convention.[71] We have no intuition of the equality of successive time intervals.[72] This philosophical idea would have the rather absurd consequence that the periodicity of oscillations, waves, and other natural rhythms would also be based on a convention.[73] More relevant is to observe that physicists are able to explain many kinds of periodic motions and processes based on laws presupposing the uniformity of kinetic time as a fundamental axiom.
In Newton’s work impressed force is the most important concept besides matter. This may be called the strongest rupture with the mechanists, who wanted to explain motion by motion. For Galileo and Descartes, matter was characterized by quantity, extension, shape, and motion.[74] Motion could only be caused by motion.[75] Newton emphasized that perceptibility and tangibility were characteristic of matter as well. The property of matter to be able to act upon things cannot be grounded on extension alone. Newton introduced a new principle of explanation, now called interaction. Besides quantitative, spatial, and kinetic relations, interactions turn out to be indispensable for the explanation of natural phenomena.
Galileo and Descartes showed motion to be a principle of explanation independent of the quantitative and spatial principles. This led them to the law of inertia, now called Newton’s first law. Descartes assumed that all natural phenomena should be explained by motion as well as matter, conceived to be identical with space. Newton relativized this kinetic principle, by demonstrating the need of another irreducible principle of explanation, the physical principle of interaction.[76] However, Newton only made a start. For, as a Copernican inspired by the idea that the earth moves, his real interest was in the explanation of all kinds of motion, including accelerated motion. The full exploration of the physical principle of explanation did not occur during the Copernican era, but in the succeeding centuries.
[1] Pfanzagl 1968, 11: ‘… measurement in classical physics poses no problems comparable to those in the behavioral sciences …’
[2] Carnap 1950, 8-15; Hempel 1952, 54-58; Stegmüller 1969-1970, 17, 27ff.
[3] Because we shall not be concerned with measurements as a human act, we shall, from now on, speak of subjects: the objects of measurement are subjects of physical and pre-physical laws.
[4] ‘Linear order’ would be a better term, cf. Sec. 2.3.
[5] Campbell 1921, Ch. 6; 1928, Ch. 1; Hempel 1952, 59; Suppes 1957, 96, 97; Ellis 1966, 27; Stegmüller 1969-1970, 29ff; Nagel 1960; Bunge 1967a, 36; 1967c, 197.
[6] Nagel 1960, 126; Stevens 1960, 144.
[7] Pfanzagl 1968, 29; Stevens 1960, 141ff; 1959, 24-26.
[8] The scratch test is not strictly transitive; cf. Campbell 1921, 128; 1928, 7; Hempel 1952, 61; there are more reliable tests. Lacking a metric, neither Mohs’ scale of hardness nor Richter’s scale for earthquakes leads to calculations.
[9] Nagel 1960, 126, 127; Bunge 1967c, 198.
[10] A notable exception is Suppes 1957, 265, 266.
[11] Bunge 1967c, 198.
[12] Reichenbach 1927, 135.
[13] Campbell 1928, 30ff; Poincaré 1906, 22; Menger 1949.
[14] See e.g. Campbell 1928, Ch. 9-11; Margenau 1960, 1950, Ch. 6; 1959, 163-176; Bunge 1967c, 209ff.
[15] Campbell 1928, 24; Hempel 1952, 29-39, 67, 68; Grünbaum 1963, 175, 176; Carnap 1966, 88; Stegmüller 1969-1970, 58, 90ff; Whiteman 1967, 256ff; Bunge 1967b, 149; 1967c, 207ff; Cassirer 1910, 57; Bridgman, in: Henkin 227.
[16] Cp. Campbell 1928, 29; Bridgman 1927, 10, 23; for a criticism of this view, see Hempel 1965, 123ff; 1960; 1966, Ch. 7; Byerly, Lazara 1973.
[17] For a more elaborate discussion, see Helmholtz 1879; Hempel 1952, 62-69; Menger 1959; see also Bunge 1967c, 200ff.
[18] Strictly speaking we compare forces (weights) in a balance. Mass is a numerical projection of physical interaction, see chapter 5, and cannot be measured directly. Cp. Jammer 1961, 105ff.
[19] It is more complicated but not essentially different, if we take into account the accuracy with which we can make replicas of A and B. A different but equivalent procedure is described by Campbell 1928, Ch. 2, 3; see also Lenzen 1938, 22ff; Suppes 1957, 96ff.
[20] Bunge 1967c, 199.
[21] Mach 1883, 268-269.
[22] Campbell 1921, 134, 142-144; Hempel 1952, 69; for a critical review of derived measurements and ‘operational definitions’ based on them, see Margenau 1960, 1950, Ch. 12.
[23] Hempel 1952, 77, 78; Bunge 1967a, 34; 1967c, 200; Nagel 1960, 128; Stegmüller 1969-1970, 47. For instance, Hempel calls ‘hardness’ an intensive property whereas according to our definitions, it is neither extensive nor intensive. Intensive parameters are also called ‘potentials’.
[24] Redlich 1968.
[25] Redlich 1968.
[26] For the following discussion, see e.g. Morse 1964, Ch. 1-6. Another example of an intensive magnitude is the electrical potential difference. The establishment of its metric between c.1780 and c.1850 caused difficulties similar to those encountered in the development of the temperature scale, cf. Stafleu 1978.
[27] Born 1949, 36.
[28] Still another method is Carathéodory’s; cf. Born 1949, 39ff.
[29] This ensures that the temperature difference between freezing and boiling water at standard pressure is still 100 degrees.
[30] Cp. Nagel 1961, 11.
[31] Grünbaum 1968, 14; see Poincaré 1905, Ch. 2; Stegmüller 1969-1970, 18, 35, 86, 98ff. For a critical review of this standpoint, see Nagel 1961, 179ff. Popper 1934, 144, 145 observes that the conventionalist’s concept of simplicity is itself conventional, and therefore arbitrary.
[32] Margenau 1960, 1950, 139; Nagel 1961, 255ff; Lenzen 1938, 19.
[33] Mittelstaedt 1963, 74. In nearly all physical cosmologies designed so far it is assumed that in the neighbourhood of the earth space-time is approximately flat, satisfying the pseudo-Euclidean metric of special relativity theory.
[34] Nagel 1961, 264; Reichenbach 1927, 26; Beth 1950, 122.
[35] Mittelstaedt 1963, 87.
[36] Whiteman 1967, Ch. 5.
[37] Grünbaum 1963, Ch. 2(A).
[38] Grünbaum 1963, 74.
[39] Grünbaum 1963, 77; on page 75, Grünbaum admits that ‘… it is a highly fortunate fact and not an a priori truth, that there exists a time metrization at all in which all accelerations with respect to inertial systems are of dynamic origin, as claimed by the Newtonian theory …’ See also Grünbaum 1968, 59 ff.
[40] This is even more striking in the examples given by Hempel 1952, 73, 74, and Stegmüller 1969-1970, 73, who discuss a time scale based on the pulse beat of the Dalai Lama or the governing president of the United States, respectively. The outcome of any experiment as described above would depend on the momentary health of these dignitaries. See also Reichenbach 1927, 20, 21, 24.
[41] Reichenbach 1927, 20, 21.
[42] Reichenbach’s (1927, 27) distinction of ‘universal’ and ‘differential’ is erroneously reduced to that between geometry and physics.
[43] Reichenbach 1927, 17.
[44] Cp. Campbell 1921, 132-134.
[45] e.g., Kuhn 1957, 3; cf. Burtt 1924, 18-20.
[46] Koyré 1961, 114-115; Lovejoy 1936, 101-108.
[47] Galileo 1632, 37.
[48] On Descartes’ and Newton’s concepts of time and space, see Koyré 1957; 1965, 79-95; Jammer 1954; Burtt 1924, Ch. 4, 7.
[49] Galileo 1632, 319-320 observes that there is no proof that the universe is finite. Aristotle’s assumption that the universe is finite and has a centre depends on his view that the starry sphere moves.
[50] Aristotle, Physics, IV, 2, 4.
[51] Also Galileo was aware of the principle of a Cartesian coordinate system, see Galileo 1632, 12-14.
[52] Newton 1687, 6-12.
[53] Newton 1687, 6.
[54] Landes 1983. During the Middle Ages, the establishment of temporal moments (like noon or midnight, or the date of Easter) was more important than the measurement of temporal intervals, which was only relevant for astronomers. Mechanical clocks came into use from the thirteenth century onward, with gradually increasing accuracy.
[55] Kant 1781-1787, A 19 ff, B 33 ff recognized its relevance.
[56] Newton 1687, 13.
[57] Newton 1687, 10-11.
[58] Grant 1981, 254-255: ‘Newton’s absolute, infinite, three-dimensional, homogeneous, indivisible, immutable, void space, which offered no resistance to the bodies that moved and rested in it, became the accepted space of Newtonian physics and cosmology for some two centuries.’
[59] Alexander (ed.) 1956; Grant 1981, 247-255. Grant 1981, 250: ‘It was less a genuine dialogue than two monologues in tandem ...’
[60] Cohen, Smith (eds.) 2002, 5: ‘Abandoning Newtonian space and time in the manner Leibniz called for would entail abandoning the law of inertia as formulated in the seventeenth century, a law at the heart of Leibniz’s dynamics.’
[61] Newton 1687, 545-546 (General scholium, 1713); Jammer 1954; Grant 1981, 240-247.
[62] Grant 1981, 255: ‘… scientists gradually lost interest in the theological implications of a space that already possessed properties derived from the deity. The properties remained with the space. Only God departed.’ Ibid. 264: ‘It was better to conceive God as a being capable of operating wherever He wished by His will alone rather than by His literal and actual presence. Better that God be in some sense transcendent rather than omnipresent, and therefore better that He be removed from space altogether. With God’s departure, physical scientists finally had an infinite, three-dimensional, void frame within which they could study the motion of bodies without the need to do theology as well.’
[63] Mach 1883, 279-286; see Grünbaum 1963, chapter 14; Disalle 2002.
[64] Mach 1883, 286-290.
[65] Pais 1982, 288: ‘… to this day Mach’s principle has not brought physics decisively farther.’
[66] In 1831 Évariste Galois introduced a group as a mathematical structure describing symmetries. In physics, groups were first applied in relativity theory, and since 1925 in atomic, molecular, and solid state physics. One of the first text books on quantum physics (Weyl 1928) dealt with the theory of groups.
[67] Margenau 1950, 139.
[68] Maxwell 1877, 29; Cassirer 1921, 364. The uniformity of time is sometimes derived from a ceteris paribus argument. If one repeats a process at different moments under exactly equal circumstances, there is no reason to suppose that the process would proceed differently, and its duration should be the same.
[69] Periodicity is not only a kinetic property, but a spatial one as well, as in crystals. In a periodic wave, the spatial periodicity is expressed in the wavelength, the temporal one in the period, both repeating themselves indefinitely.
[70] In the twentieth century, a second became defined as the duration of 9,192,631,770 periods of the radiation arising from the transition between two hyperfine levels of the atom caesium 133. This number gives an impression of the accuracy in measuring the frequency of electromagnetic microwaves.
[71] Mach 1883, 217: ‘Die Frage, ob eine Bewegung an sich gleichförmig sei, hat gar keinen Sinn. Ebensowenig können wir von einer “absoluten Zeit” (unabhängig von jeder Veränderung) sprechen.’ (‘The question of whether a motion is uniform in itself has no meaning at all. No more can we speak of an “absolute time”, independent of any change.’) Poincaré 1905, chapter 2; Reichenbach 1956, 116-119; Grünbaum 1968, 19, 70; Carnap 1966, chapter 8 argues that the choice of the metric of time rests on simplicity: the formulation of natural laws is simplest if one sticks to this convention.
[74] Galileo 1623, 277-278; Koyré 1939, 179.
[75] Dijksterhuis 1950, 503.
[76] Dijksterhuis 1950, 515.
Chapter 4
The dynamic development
of kinematics
4.1. The irreducibility of the kinetic relation frame
4.2. Kinetic time
4.3. Combining velocities
4.4. Einstein’s critique of Newton’s absolute space
4.5. The interval as an objective kinematical relation
4.6. The special theory of relativity deals with the kinematic relation frame
4.7. General theory of relativity
4.8. The principle of relativity
4.1. The irreducibility
of the kinetic relation frame
Chapter 4 investigates the dynamic development of the kinetic relation frame. Although the emphasis will be on relativity, it starts with the recognition of the irreducibility of this frame to the numerical, spatial, and physical ones.

One of Galileo’s greatest contributions to physics, although his own account of it is not quite correct, is the discovery that change of motion – not motion itself – needs a cause. This principle is now known as Newton’s first law of motion: if no net force acts on a body, it will not necessarily be in a state of rest, as was the prevailing view since Aristotle, but will remain in a state of uniform rectilinear motion.
This statement has been criticized. First, one may observe that forces are defined as causes of change of motion, and therefore it is circular reasoning to say that if there are no forces acting on a body there is no change of motion. Secondly, a state of uniform motion depends on the reference system with respect to which the motion is measured. Once again one may speak of circular reasoning. Now, when introducing fundamental, irreducible concepts, circular reasoning is not always avoidable. The problem is not how to get out of the circle, but how to get into it (3.7). Irreducible concepts cannot be derived from already known concepts, but have to be distilled from them, to be disentangled from historically grown views which are partly right, partly wrong. It needs giants like Galileo to perform this task.

Moreover, it is not quite correct to say that forces are defined by their static effects or by their effects on motion. This may be called their modal determination: in a purely modal sense forces are defined in this way. But this must be supplemented by the typical manifestations of forces, such as electric, magnetic, gravitational, frictional, and elastic forces. These can be distinguished, although they do not lead to typical motions. It makes no sense to say that a subject under the influence of an electric force ‘moves electrically’, ‘has an electric acceleration’, etc. Nevertheless these forces can be recognized in ways other than their action on moving bodies in a purely modal sense. They can balance each other, such that they are comparable. I shall defer the discussion of this matter until later. In this chapter I shall concentrate on the second problem mentioned above, the relevance of the reference system. This is possible just because of Galileo’s principle. It expresses the mutual irreducibility of the kinetic and the physical relation frames. Forces and other manifestations of physical interaction belong to the latter.

The relativity of motion implies that it is meaningless to attach motion to a single subject without reference to some other system. This does not mean that it is merely conventional to choose a reference system, even if dynamical effects are not explicitly mentioned. A notable example is Copernicus’ heliocentric system of planetary motion, which is generally undervalued by conventionalist authors. It is (erroneously) stated that the replacement of the 83 epicycles of Ptolemy’s earth-centred system by 17 epicycles in Copernicus’ theory greatly simplified matters, but nothing else, since, in principle, both should be considered on a par from a relativistic standpoint.[1]
The simplification was not even that great,[2] since the predictive power of Copernicus’ model was not better than that of Ptolemy, and therefore Tycho Brahe’s objections against the new theory were sound enough.[3] Copernicus’ theory did not win the battle because of its simplicity or quantitative features, but because it proved to be superior in some qualitative respects. The assumption that the planets move around the sun, not around the earth, is not merely a change of reference system. It enabled Copernicus to solve several problems,[4] such as the problem of why Venus and Mercury are always seen near the sun, and therefore, why these planets’ period of revolution in their deferents is just one year; the problem of why Mars and the other superior planets always show retrograde motion when they are in opposition with the sun, and, therefore, why these planets’ period of revolution in their epicycles is just one year; the problem of why Venus’ appearance is ‘full’ when this planet is far away (small apparent diameter) and ‘crescent’ when it is nearby (large apparent diameter) and showing retrograde motion. The latter argument indicates that Venus moves around the sun, and not in a sphere well below the sun’s sphere, as in Ptolemy’s model.[5] In particular, Copernicus was able to determine the relative distances of the planets, which is impossible in the Ptolemaic system.
Hence, the Copernican system was accepted because it had greater explanatory power than Ptolemy’s. This was the case only after Galileo removed the largest objections against the dual motion of the earth, by introducing the ideas of inertia, relativity of motion, and superposition of motion. These objections rested on the fact that the motion of the earth appears to have no consequences for the motion of terrestrial objects.
Why Kepler accepted Copernicanism is quite a different matter. Kepler and Galileo moved along parallel tracks. While Galileo removed the said objections yet remained faithful to uniform circular motion, Kepler came to reject the latter. Ptolemy’s system can be understood as a marvellous attempt to explain celestial motion in terms of simple uniform circular motion. Uniform circular motion was a kinetic principle of explanation introduced by Plato and maintained by Aristotle and all medieval authors, including Copernicus. Eventually Copernicus’ system was replaced by Kepler’s system, which is nearly the final solution of planetary motion as a kinetic problem.[6] Kepler himself was an ardent adherent to the Pythagorean-Platonic tradition, but since he rated Tycho Brahe’s observations higher than any theory, he came to reject circular uniform motion as an irreducible principle of explanation. Because the planets turned out to move in elliptical orbits with a varying velocity in his system, Kepler immediately recognized that the theory required a further explanation: not a kinetic, but a physical one, which was later provided by Newton’s theory of gravitation.[7] Newton replaced circular uniform motion by linear uniform motion as an irreducible principle of explanation.
Newton’s first law is sometimes considered to be a special case of his second law.[8] However, the second law is only valid if taken with respect to inertial systems. A body on which no unbalanced force is acting moves uniformly with respect to an inertial system. Hence, the first law can be understood as an existential statement, stating the existence of inertial systems.
This implies the discovery that the physical interaction between two subjects is independent of their common uniform motion with respect to some spatial coordinate system, or the temporal moment at which the interaction occurs. This discovery was already made in classical physics, but it plays a far more consequential role in relativity theory.
4.2. The uniformity of kinetic time
Classical physics was chiefly interested in so-called particle motion – the relative motion of rigid bodies. Although kinetic relations were described, kinetic subjects were not recognized. Partial recognition came in the various theories of wave motion, but only with the rise of quantum physics were genuine kinetic subjects (wave packets, see chapter 7) considered.
Uniform rectilinear motion is relative. One cannot say that a subject moves, if one does not specify with respect to which other subject it moves. Thus relative, rectilinear, uniform motion is a subject-subject relation. On the law side this relative motion presupposes the uniform flow of time as the kinetic time order. To common view it seems rather obvious that time flows uniformly, e.g., an hour today is just as long as an hour tomorrow. As late as the 14th century, however, the day (the time between sunrise and sunset) was rigidly divided into twelve hours, with the effect that an hour in winter was shorter than an hour in summer in northern countries.[9] Clearly this chronology would not allow of describing kinetic motion as uniform, and it was abandoned when mechanical clocks came into use.
The idea of time flow is rejected by some philosophers,[10] because there is no motion besides the motion of actual subjects. In our view this argument does not hold, because every law is only meaningful if related to subjects. In the same vein, one may also hold that there is no space, because there are only spatial relations between actual subjects. Indeed, nowadays most philosophers and physicists agree that there is no space or time in an absolute sense. Accordingly, in this book the view is defended that the uniform flow of time is a general, irreducible, modal order of time, as such unbreakably connected to subjective, relative motion.[11] On the one hand, the uniformity of motion means equal distances in equal times. On the other hand, the equality of temporal intervals is determined by a clock subject to the norm that it represents uniform motion correctly. This circularity is unavoidable, meaning that the uniformity of kinetic time is an unprovable axiom. However, this axiom is not a convention, but an expression of a fundamental and irreducible law.
Relative motion is objectified and measured by the velocity, the ratio of displacement (considered as a vector) and the duration of the motion. The displacement and the duration are connected via their common end points, usually called ‘point events’. Generally, an event is something endowed with typical individuality, but in a kinematic modal sense, it is a coincidence. The fact that events can be preceded and followed by other events refers back to the serial order of earlier and later. There are events which are simultaneous and this fact refers back to the spatial aspect. At first sight it appears possible to order events according to serial and spatial principles in an essentially static pattern of moments, which does not differ in any sense from a quasi-serial order (chapter 3).[12]
It is an empirical fact that a single identifiable subject can be at different places successively, and that different parts of the self-same subject can also occupy the same place at different moments. This is called motion. It leads to a new ordering, one irreducible to the spatial and the numerical, but which presupposes them. Attempts to reduce motion to succession in a continuous or dense point set lead to paradoxes like Zeno’s. Linear motion by a representative point (like a centre of mass) supposes that all positions on the path of motion are traversed successively. Because on a continuous line no spatial point has a unique successor, this motion cannot be reduced to spatial continuity.
The path of the moving subject (which refers back to the spatial modal aspect), the displacement of the subject, and the duration of its movement are objects in the kinetic aspect. The latter two concepts, displacement and duration, should not be confused with relative position and time difference, respectively. Before these static relations can be used in a kinematic context, they must be developed into displacement and temporal duration, respectively. Whereas relative position and time difference relate different subjects, displacement and temporal duration apply to one subject. Displacement and temporal duration are related by the velocity of the movement. The velocity is therefore a numerical objective representation of relative motion. Velocity has a group character in both classical and relativity physics, although the group relations are different in the two theories. This means that point events as common boundaries of the displacement and the duration are second order objects. In particle physics, the path of the motion is usually reduced to the path of the centre of mass of the moving subject, which means that after objectifying kinematic subjects to rigid bodies, a rigid body is objectified to a single point. In field theories this is impossible because the motion of waves is essentially extended.
Hence what is usually called time or duration is but an objective relation in the kinetic modal aspect – a relation giving an objective representation of relative motion. Time receives its serial character because it refers back to the numerical modal aspect, but it is still subjected to the kinetic order of uniformity.
Time and again
4.3. Combining velocities
Above I have argued that the difference between two rational numbers is a subjective relation in the numerical modal aspect, whereas the distance and velocity are objective relations in the spatial and kinetic modal aspects, respectively. If in one of these cases the relation between two subjects A and B is known, as well as the relation between B and a third subject C, would it be possible to find the relation between A and C? In the numerical aspect the answer is yes: if A, B, and C are numbers, then (A–C) = (A–B) + (B–C). In multidimensional space this simple addition rule is only valid if applied to the coordinates of the points A, B, and C. But generally speaking, the distance AC between the points A and C is less than the sum of the distances AB and BC. Thus the addition rule in the spatial modal aspect differs from the one in the numerical relation frame. What about the addition of velocities?
In classical mechanics the spatial substratum of kinetic motion is an absolute space in which distances retain their original geometric meaning. The time flow, as kinematical order of time, is also considered absolute, and only the numerical time difference or duration on the subject side remains as an objective measure of motion. Accordingly, one may add velocities in the same way as one adds distances in original geometrical space.
To a first approximation this is not too bad. Of course, in many cases original geometric space will approximate kinetic space very well. Specifically, this approximation appears to be valid as long as the relative velocities concerned are not too large (i.e., small compared to the speed of light).[13]
However, experiments such as those of Albert Michelson and Edward Morley in 1887 led to the conclusion that the classical addition of velocities is not valid if the speed of light is involved. Any velocity added to the velocity of light results in the velocity of light itself. Hendrik Lorentz[14] concluded that distances depend on motion. He tried to explain the phenomenon from the typical structure of matter by reducing the so-called Lorentz contraction of the measuring sticks used in the experiment to an electromagnetic cause. However, in 1905 Albert Einstein showed that this contraction has no dynamical, physical cause, but is entirely of a kinetic nature. But before he could do so, he had to reconsider the 19th-century concepts of absolute space and time (which were derived from Leibniz and Kant rather than from Newton).
4.4. Einstein’s critique of absolute space
In classical physics the velocity of some particular moving subject is chosen as a unit, and the time needed to cover the unit of length is the unit of time: time is conceptually measured as a distance. Hence the comparison of two movements is reduced to the comparison of two distances covered in the same time. However, the possibility of measuring distances depends on the end points of the distance to be measured. Therefore, Albert Einstein’s critique of 19th-century kinematics was directed first of all to the use of the concept of spatial simultaneity in kinematics.[15]
In order to fix the velocity of a moving subject as the ratio of traversed distance and time difference, two clocks are required to establish the duration of the motion. These clocks, placed at the end points of the covered path, have to be synchronous. How is this established? There is no other possibility but to send a signal from one clock to the other. But then one has to know the velocity of the signal if one wishes to determine the time difference between its emission and arrival. To measure this velocity two synchronous clocks are needed, leading to a vicious circle.
Einstein proved there is only one way out of this deadlock. Suppose the signal emitted by clock I at time t1 is received at clock II at time t’, and immediately reflected, returning to clock I at time t2. Now the instant t=½(t1+t2) on clock I is defined to be simultaneous with time t’ on clock II.
This at first sight plausible definition is mistakenly called conventional, because the signal is supposed to have the same velocity in both directions – and this presupposition cannot be verified in the above-mentioned procedure of synchronization.[16] Actually it is not really very plausible, and, in fact, even contrary to classical mechanics. If both clocks move (with the same velocity) with respect to a third subject, then the velocity of the signal according to classical mechanics is not the same in both directions if measured with respect to this third system. And if we apply this synchronization procedure to two clocks moving with respect to each other, then according to classical mechanics, it is impossible for the signal to have the same velocity in both directions with respect to both clocks. In fact, the absolute, resting electromagnetic ether of the 19th century can be said to be invented to overcome these difficulties. It follows that absolute simultaneity is not valid in kinetic space. Still it is a mistake to call the above-mentioned definition of simultaneity conventional. It is based on the isotropy of kinematical space, which does not permit different velocities of light in different directions.[17]

If one wishes to treat kinetic subjects, distance should not be conceived in a static-spatial sense, but must be opened up. Because no signal propagates with an infinite velocity, an instantaneously established distance has no kinematic meaning. 19th-century physics supposed the actual existence of a substantial ether as a physical-spatial substratum of optical and electromagnetic phenomena.[18] In the special theory of relativity Einstein proved that this hypothesis cannot be verified experimentally.
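The synchronization procedure itself is easily expressed in a few lines (a sketch with invented numbers; the distance and emission time are arbitrary assumptions):

    C = 3.0e8  # speed of light in m/s

    def einstein_sync(t1, t2):
        """The instant on clock I defined to be simultaneous with the
        reflection event at clock II: t = (t1 + t2)/2."""
        return 0.5 * (t1 + t2)

    d = 3.0e8               # distance between the clocks in metres
    t1 = 0.0                # emission of the signal at clock I
    t_prime = t1 + d / C    # reflection at clock II (isotropy assumed)
    t2 = t_prime + d / C    # return of the signal at clock I
    print(einstein_sync(t1, t2))  # 1.0 s, simultaneous with t' by definition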
4.5. The interval as an objective kinetic relation
Einstein based his theory of relativity on the hypothesis that one singular signal has the same constant velocity (c) with respect to all possible moving systems. It is not necessary that such a signal actually exists. The empirically established fact that the velocity of light in vacuum satisfies the hypothesis is comparatively irrelevant.[19]
In order to achieve this, Einstein had to amend the classical addition formula for velocities. In the one-dimensional case, two subjects moving with velocities v and w with respect to a third subject have a relative velocity (v–w)/(1–vw/c²), instead of the classical value (v–w). It can easily be proved that (a) this relative velocity is independent of the choice of the reference system (i.e., a coordinate system with a clock), as it should be; (b) a subject moving with velocity c with respect to one reference system does so with respect to all reference systems; (c) no subject can move with a velocity exceeding the value c with respect to any reference system; (d) this expression approximates the classical one if the velocities are low.
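Properties (b), (c), and (d) are easily verified numerically. The following minimal sketch (in Python, with c set to 1 and arbitrarily chosen velocities; an illustration, not part of the argument) shows the formula at work:

# Relativistic relative velocity (v - w) / (1 - v*w/c^2), with c = 1.
c = 1.0

def relative_velocity(v, w):
    """Relative velocity of two subjects moving at v and w
    with respect to a common reference system."""
    return (v - w) / (1 - v * w / c**2)

print(relative_velocity(c, 0.5))       # (b) 1.0: a subject moving at c does so in every frame
print(relative_velocity(0.99, -0.99))  # (c) ~0.99995: the result never exceeds c
print(relative_velocity(1e-4, -1e-4))  # (d) ~2e-4: close to the classical value v - w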
In original space the distance d is independent of the chosen coordinate system (2.7). Einstein defined the interval s between two point events at positions (x1,y1,z1) and (x2,y2,z2), and at times t1 and t2, by

s² = c²(t1–t2)² – (x1–x2)² – (y1–y2)² – (z1–z2)²
Because the velocity of light c must be the same in any reference system, a spherical light wave front emerging from a point source must be spherical in any reference system. This leads immediately to the above formula. Einstein demonstrated that this pseudo-Euclidean metric of four-dimensional Minkowski space (as it was later called) is independent of the choice of the moving reference system, i.e., the metric is invariant under all transformations of the Lorentz group. However, the distance d and the time difference (t1–t2) now depend on the motion of the reference system, and can no longer serve as independent objective time relations. They are replaced by the interval, which now serves to objectify kinetic subject-subject relations. The interval itself does not describe motions. It is a relation in the opened-up numerical-spatial substratum of the kinetic aspect.
Three cases can be distinguished: s² may be negative, positive, or zero. In the first case, s²<0, it is always possible to choose a reference system such that (for the two point events under consideration) t1=t2. This means that with respect to that reference system (and all reference systems having the same velocity), the two events occur simultaneously, but at different places. This interval is now called space-like, because it looks like a distance. In other systems of reference, t1 may be before as well as after t2. It can be shown that in that case no causal relation between the two events can exist, so that the irreversibility as the physical time order is not violated.[20]
In the second case, s²>0, a reference system exists such that the two events occur at the same place (d=0), but at different times. If t1 occurs before t2 in this reference system, t1 occurs before t2 in every other reference system. In this case of a time-like interval, a causal relation between the two events is possible, and their time sequence is independent of the choice of the reference system. This is not the case if the reference system is transformed into one in which the time flow is reversed. This transformation (called time reversal) is kinematically admitted, but should be excluded with respect to physical interactions.[21]
In the third, borderline case, s²=0, the two events may be connected by a light signal. No reference system exists in which either t1=t2 or the two events occur at the same place. But if t1>t2, then this is the case in any other reference system (time reversal excluded).
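Both the invariance of the interval and the space-like case can be checked with a small numerical sketch (Python, one spatial dimension, c=1; the boost formulas t’=γ(t–vx/c²), x’=γ(x–vt) are the standard Lorentz transformation, and the chosen values of dt and dx are illustrative):

import math

c = 1.0

def interval_squared(dt, dx):
    # s^2 = c^2 (t1 - t2)^2 - d^2, restricted to one spatial dimension
    return (c * dt)**2 - dx**2

def boost(dt, dx, v):
    """Transform a time difference and a distance to a frame moving at velocity v."""
    gamma = 1 / math.sqrt(1 - v**2 / c**2)
    return gamma * (dt - v * dx / c**2), gamma * (dx - v * dt)

dt, dx = 1.0, 2.0                    # a space-like pair of events: s^2 = -3
for v in (0.0, 0.5, -0.9):
    print(round(interval_squared(*boost(dt, dx, v)), 12))   # -3.0 in every frame

# For s^2 < 0 a frame exists in which the two events are simultaneous:
print(round(boost(dt, dx, c**2 * dt / dx)[0], 12))          # 0.0, i.e. t1 = t2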
Hence, according to relativity theory, the numerical and spatial aspects of time do not lose their original meaning, but they lose their absoluteness when they are opened up by the kinematical modal aspect.[22] This applies to the subject side, where time difference and distance are bound together into the interval, as well as to the law side, where the order of before and after and that of simultaneity become relative to motion. In this respect relativity is profoundly different from the classical conception in which the numerical and spatial modal aspects function in closed form with respect to motion.
4.6. The special theory of relativity deals with the kinetic relation frame
Because the velocity of light c occurs in the metric of kinematical space, one may wonder if this metric refers rather to physical space, or perhaps to electromagnetic wave motion. Both questions should be answered negatively. I shall offer three arguments for this view, before presenting a more positive argument for the thesis that the special theory of relativity is purely kinematical.
The occurrence of a ‘typical number’ (c=3×10⁸ m/sec) in the metric is as such of minor significance. If length and width (x, y) were measured in centimetres, and height (z) in inches (1 inch taken as 2.5 cm), distance would be defined by

d² = x² + y² + 6.25z²
in order to arrive at a consistent geometry. The occurrence of the remarkable number 6.25 in this formula, or that of c in relativity theory, could be avoided by the choice of a coherent system of units. The number c occurs in the metric simply as long as the use of metres and seconds is retained. The second is a kinematic objective unit, which in principle could be replaced by a unit related to the metre via the metric of special relativity, assuming c=1. This method has practical drawbacks (the speed of light is difficult to measure), but in the formulas of relativity theory, velocities are often given in proportion to the velocity of light, which may thus be considered the natural unit of speed.
In a purely mathematical analysis of the kinematic relation frame c is the limiting velocity of real moving subjects. A subject moving at higher speed would have imaginary time duration and spatial extension. Such quasi-subjects, baptized ‘tachyons’[23], may have an abstract, modal meaning, but they will hardly be recognizable as abstractions of real, actual subjects.[24] Even an actual light signal never propagates with the velocity c. This is a limiting velocity which would occur in a vacuum. But a vacuum, although nearly approximated in interstellar space, is itself a limiting abstract concept. No spatial realm is really empty, and in any material medium the velocity of light is less than c. The constant c is the limit rather than the velocity of light. In a medium one may find particles moving faster than light in that medium (this is the phenomenon of Čerenkov radiation), but their velocity is still smaller than c. The so-called phase-velocity of a wave packet may be larger than c, but the phase cannot transmit signals, and the so-called group-velocity of the packet, which is identified with the particle’s velocity, is always smaller than c.
The laws of relativity theory have other consequences for a number of physical phenomena which are not necessarily of electromagnetic origin. For instance, all particles having zero rest mass move with the velocity c (in vacuum). This is not only the case with light quanta, but also with the as yet hypothetical quanta of gravitation. Another consequence of relativity theory is that the measured (objective) mean decay time of moving radioactive particles increases as they move faster with respect to the measuring instrument. This time dilation is not only observed in radioactive decay caused by electromagnetic interaction, but also occurs if caused by weak or strong nuclear interaction. The latter cannot be reduced to electromagnetic forces, whereas their velocities of transmission are less than the speed of light. In other words, it might have been possible to discover the laws of relativity theory if one had known only the time dilation of non-electromagnetic phenomena. One could have found these laws if all actually existing signals moved with velocities less than the constant c in the metric.[25] Thus c is not, in the first place, the velocity of light, but rather the velocity of light’s propagation in a vacuum is equal to c due to the typical structure of electromagnetic interaction.[26]
The speed of light as the unit of velocity
Let me now present the positive argument for the thesis that the special theory of relativity is purely kinematical. Section 2.8 showed that the choice of the unit is arbitrary for spatial coordinate systems in the Euclidean metric. However, in order to be able to give transformation rules between the several possible coordinate systems, it must be assumed (as is usually tacitly done) that the same unit of length applies in all coordinate systems. In Galilean relativity the same assumption is made. It is taken for granted that the units of length and of time are the same in all reference systems. This assumption is sufficient to derive the so-called Galilean group of transformations between inertial systems. But the choice of these units, as basic units, should now be scrutinized.
In this context time means kinetic time, determined by the distance covered by a uniformly moving subject. However, in different frames of reference, one has no right to assume that the unit of length will be the same, and, accordingly, that the unit of time will be the same. Moreover, it may be questioned whether length and time should be taken as basic parameters. The parameter which distinguishes the kinematic reference frames is velocity, just as distance distinguishes the spatial ones. At first sight velocity is a derived quantity, for it is defined as the ratio of the covered distance and the corresponding time interval. Because instantaneous velocity can only be approximated in this way, this definition could only be anticipatory. However, kinetic time can only be introduced with the help of a subject moving with constant velocity. Thus it seems appropriate to take velocity as the basic unit in kinematic reference systems, and to demand that the kinematic transformation rules leave the unit of velocity invariant. If this unit is taken to be c, one arrives at the basic hypothesis of Einstein’s theory of special relativity. This hypothesis is again sufficient to find the so-called Lorentz group of transformations between inertial systems. It is an empirical matter whether the Galilean or the Lorentz transformations are valid – there is no logical ground for this decision.
Einstein’s paper of 1905, in which he published his relativity theory for the first time,[27] was divided into two parts. The kinematical part gives all the relevant formulae of relativity theory, which are applied to the electromagnetic problems in the second part. Before Einstein, Henri Poincaré came very close to the discovery of the special theory of relativity. He denied absolute space and absolute time, referred to a principle of relative motion and a principle of relativity, and sought invariant forms of physical laws under transformation. ‘But the existence of the ether is rarely doubted, for, like Lorentz, Poincaré explained by compensation of effects the apparent validity of absolute laws in moving inertial systems and maintained the privileged position of the ether’.[28]
Only Einstein took the decisive step, recognizing that the relativistic effects have a kinematic origin, rather than a physical one. This also applies to Walter Kaufmann’s experimental discovery that the mass of fast-moving electrons depends on their speed. By proving the equivalence of mass and energy, Einstein put an end to many speculations about the origin of mass.[29]
4.7. The principle of relativity
Besides uniform linear motion, a rigid body is able to perform a uniform rotation without any physical cause. The difference is that the latter is possible only for a rigid body (or at least a system whose parts are kept together by an attractive force, such as the planetary system), whereas uniform linear motion is possible for any subject. In fact, every part of the rotating body experiences centrifugal and Coriolis forces, but due to its internal coherence the force exerted on one part is compensated by that on another part. Therefore, uniform rotation is a bounded uniform motion, not entirely of a general modal nature, since it depends on the typical structure of the body, giving rise to some internal force.
Both in classical and in special relativity any reference system rotated about a finite angle with respect to an inertial system is itself an inertial system. This has nothing to do with uniform rotational movement, and applies as well to the coordinate systems in geometrical space. Any coordinate system can be rotated about any angle or displaced any distance without disturbing the relative spatial positions of the subjects (2.8). But two different reference systems can not only be translated any distance or rotated about any angle, they can also move uniformly with respect to each other without generating a fictitious gravitational field which would exist in one system, but not in the other one. On the other hand, in a uniformly rotating reference system fictitious gravitational fields must be introduced. This is the case in special and in general relativity theory, just as in classical mechanics.
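The fictitious field of a uniformly rotating system can be made concrete with a minimal numerical sketch (Python with NumPy; the standard expressions a=–ω×(ω×r) for the centrifugal and a=–2ω×v for the Coriolis acceleration are evaluated for illustrative values, the earth’s angular velocity and an equatorial position):

import numpy as np

omega = np.array([0.0, 0.0, 7.292e-5])   # rad/s: the earth's angular velocity
r = np.array([6.371e6, 0.0, 0.0])        # m: a position on the equator
v = np.array([0.0, 10.0, 0.0])           # m/s: eastward motion in the rotating frame

a_centrifugal = -np.cross(omega, np.cross(omega, r))   # ~0.034 m/s^2, directed outward
a_coriolis = -2 * np.cross(omega, v)                   # perpendicular to the velocity

print(a_centrifugal, a_coriolis)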
Isaac Newton knew very well that it is impossible to derive the existence of an absolute system of reference by considering uniform linear motion alone. But he thought that at least all rotations could be established experimentally with respect to this absolute space (the famous pail experiment[30]). His main contemporary opponents, Christiaan Huygens and Gottfried Leibniz, could not refute his arguments, because their arguments were mainly logical rather than physical.[31] It was not until 1883 that Ernst Mach pointed out a flaw in the reasoning: the experimentally established motion does not occur with respect to an absolute reference system, but with respect to the whole of all matter found in space.[32] Later Einstein corrected this view by showing that the rotation occurs with respect to a local inertial system.[33]
Indeed, the distinction between linear motion, as a purely kinetic motion, and rotation, as a movement which anticipates the physical modal aspect, is only comprehensible if the irreducibility of the kinetic modal aspect is accepted as an empirical fact. Newton as well as Leibniz and Huygens were led astray in their judgment of spatial concepts by the supposed reduction of kinetic relations to spatial relations, or conversely, by the inclusion of geometry in kinematics or mechanics. On the one hand, Newton considered the spatial aspect to be subordinated to the mechanical one.[34] This convinced him of the existence of an absolute space – an idea which, he thought, was confirmed by experiments on rotating systems. On the other hand, Huygens and Leibniz tried in vain to reduce the relativity of motion to the relativity of spatial position. Since spatial relative positions are invariant with respect to both translations and rotations of the coordinate system in Euclidean space, they had to assume that not only linear motions but also circular motions ought to be purely relative in a kinematical sense.[35]
The ideas of Huygens and Leibniz were revived in Mach’s principle, which in its original form stated that any kind of inertia is caused by the mutual interaction of matter. It is now generally (though not unanimously) rejected, because it has turned out to be very difficult to develop this principle into a satisfactory mathematical theory.[36] The principle of relativity, as formulated by Einstein, is more restricted than Mach’s principle.
If we make a transition from one inertial system to another, the interval remains the same. Therefore, the interval is called an invariant. The components (x1–x2), etc., of the interval are changed, however. We call a certain variable (depending on x, y, z, and t) a covariant if it transforms under the transition in the same manner as the four components of the interval.[37] Every mathematical expression or quantity referring to the physical aspect should be either an invariant or a covariant, because of the mutual irreducibility of the kinetic and physical aspects. This is Einstein’s principle of relativity, expressed in terms of the philosophy of the cosmonomic idea. It is the same requirement of objectivity as discussed in section 2.8. The electric charge, the internal energy or rest mass, and the entropy of an isolated system are invariants. The total energy plus the momentum, and the electric plus the magnetic field strengths in a point, are covariant variables.
The principle of relativity is based on the mutual irreducibility of the kinetic and the physical relation frames.[38] Thus far the principle of relativity has mainly been treated from the subject side, but it also has bearing on the law side. The formulation of physical laws must be frame-independent, whereas the subjective initial and boundary conditions depend on the choice of frame.[39] In both cases the same thing is meant.
Because the pre-physical modal aspects are irreducible to the physical one, they can be used to objectify the latter. But in order to make full use of this possibility due account should be given of that irreducibility. The laws of physics must be independent of time, position, and motion. This implies, e.g., that if one considers different sets of subjects with similar subject-subject relations, one must have similar experimental results. This is the basis of objective experimental research, which must arrive at results, reproducible at any place, at any time, and at any velocity.
In its turn, the frame invariance of physical interaction gives rise to the conservation laws of energy, linear and angular momentum, and of the motion of the centre of mass, for isolated systems. Hence, these laws are related to the irreducibility of the physical modal aspect and the aspects of number, space, and motion (chapter 5).
4.8. General theory of relativity
After the dynamic development of the kinetic relation frame and the preceding ones in the special theory of relativity, in the general theory the kinetic frame is in turn opened up by the physical one.
Both in classical mechanics and in special relativity theory purely kinetic uniform motion is rectilinear. This is related to the assumption that the spatial substratum is (pseudo-)Euclidean. However, according to Einstein’s general theory of relativity, the physically relevant spatial substratum is not Euclidean, but is determined by the temporal and spatial distribution of energy. With respect to this non-Euclidean space, kinetic motion occurs along a so-called geodesic, which is the equivalent of a straight line in Euclidean space.[40] Light travels along a geodesic just like a freely falling massive subject. According to Einstein, the local inertial system serves as a substratum for physical interactions.[41] In particular, Newton’s third law of action and reaction is supposed to be valid (in a static situation) only if referred to a local inertial system. All available local inertial systems are very much like the pseudo-Euclidean reference systems of special relativity theory. Usually the latter are taken as the pre-physical substratum of physical interactions, except when large-scale phenomena are studied.[42]
The non-Euclidean character of the metric as determined by the energy distribution affects not only the spatial part of the metric, but also the quantitative one. The metric refers not only to space, but to the whole numerical and spatial substratum of the kinetic relation frame, now in its opened-up form. If the time flow is supposed to be uniform with respect to this reference system, it is no longer uniform with respect to a Euclidean reference system.
If kinetic motion is described with respect to a Euclidean reference system, one accounts for the fact that this motion is not uniform by introducing a field of gravitation. A (non-Euclidean) reference system in which no gravitation occurs is called an inertial system. A non-inertial reference system has accelerated motion with respect to an inertial system. For example, the reference system connected with an artificial earth satellite is an inertial system in which the gravitational field of the earth has been transformed away. This can only be achieved locally, because it is impossible to find a universal inertial system – i.e., a system with respect to which any gravitational field wherever is transformed away.
In a reference system moving non-uniformly with respect to a local inertial system, a gravitational field is experienced – such as the extra weight felt in an elevator accelerating upwards. A physically more important example is uniform rotation. If the earth is considered as a reference system, then part of its experienced gravitational field is caused by the earth’s rotation and gives rise to centrifugal and Coriolis forces. These forces are sometimes called fictitious because they have no physical origin, but a kinetic one. According to Einstein such a fictitious field cannot be distinguished from the gravitational field determined by the spatial energy distribution (principle of equivalence).
This is not completely correct, however, because in contrast to the latter gravitational field, a fictitious field can be transformed away everywhere. This means that only locally the two fields cannot be distinguished.[43] Moreover, the fictitious field has no physical source, but has a purely modal kinetic origin and meaning, whereas the energy distribution, the source of a real gravitational field, is always connected to some typical structures, by which it can be identified. In extended freely falling systems (like the earth in the gravitational field of the sun and the moon) one has detectable differential gravitational forces, like those giving rise to the tidal motions of the seas. The existence of a uniform homogeneous gravitational field throughout the universe can be ruled out by symmetry arguments (isotropy of space). Thus we find that real forces (including the gravitational force) are expressions of physical subject-subject relations, which cannot be said of fictitious forces. In particular, fictitious forces are not subjected to Newton’s third law of motion, the law of action and reaction. Because a fictitious force has no physical origin, there is no reaction force.

Einstein had two starting points for the general theory of relativity. (a) Newton’s theory of gravitation implied immediate action at a distance, and is therefore incompatible with relativity theory.[44] (b) Gravitation is a universal interaction,[45] and must also be applicable to systems not endowed with mass, e.g., light signals. The second point means that the general theory of relativity should be considered a modal theory. All free physical subjects, irrespective of their typical structure, move uniformly in a local inertial system, or they are influenced by the corresponding gravitational field in a non-inertial reference system in exactly the same way. This is sometimes expressed by the equivalence of gravitational and inertial mass. However, the concepts of gravitational and inertial mass do not apply to light signals, which also move along geodesics.[46] On the other hand, it is impossible to transform away an electromagnetic field, which has a typical structure. For example, the influence of this field on the motion of a subject depends on the ratio between its charge and rest mass, that is, on its internal typical structure.
[1] Margenau 1950, 96, 97; 1960; see also Reichenbach 1927, 211, 217-219. Reichenbach is more cautious than Margenau 1960, but he is mistaken when he prefers the Copernican system to the Ptolemaic one, because the former has a dynamic explanation. Such an explanation is only possible with Kepler’s system. Mach 1883, 279, 283-284 does not discuss the epicycle theory. He merely states that the Ptolemaic and the Copernican modes of view are equally correct. Only the latter is more simple and more practical. ‘The universe is not twice given, with an earth at rest and an earth in motion, but only once, with its relative motions alone determinable.’
[2] Dijksterhuis 1950, 325 observes that the introduction of the earth’s motion about the sun could not make more than five epicycles superfluous, and Kuhn 1957, 171 states: ‘Judged on purely practical grounds, Copernicus’ new planetary system was a failure; it was neither more accurate nor significantly simpler than its Ptolemaic predecessors …’. See also Kuhn 1957, 168; Koestler 1959, 194, 195, 579, 580; Koyré 1961, 43; Feyerabend 1964; Gillispie 1960, 24-26; Toulmin, Goodfield 1961, 175, 179.
[3] Dijksterhuis 1950, 332ff; Kuhn 1957, 200; Feyerabend 1962, 260, 261; 1978 40 ff; Toulmin, Goodfield 1961, 184ff; Hanson 1973, 171-249.
[4] Copernicus 1543; Dijksterhuis 1950, 321-324; Kuhn 1957, 171-180; Koyré 1961, 45ff, 129; Toulmin, Goodfield 1961, 172-173; Hesse 1974, 232; Lakatos 1978, 168-189.
[5] Actually, the fact that Venus’ brightness does not vary appreciably during its motion around the sun was used as an argument against the Copernican theory by Osiander in his Preface to Copernicus’ work (see Copernicus 1543, 22). By pointing to the phases of Venus, Galileo turned the argument in favour of Copernicus’ system. See Galileo 1632, 334-339; Feyerabend 1975, 109-111; Kuhn 1957, 222-224.
[6] Kuhn 1957, 209ff; Dijksterhuis 1950, 335-357; Koestler 1959, 227-427; Koyré 1961, 117-464; Toulmin, Goodfield 1961, 198ff.
[7] Kuhn 1957, 153, 245, 252; Toulmin, Goodfield 1961, 201.
[8] See e.g., Mach 1883, 171-172. This gives rise to the misunderstanding that inertia is due to the mass of the subject. But Newton’s first law does not contain any reference to the mass of the subjects concerned. See also Dijksterhuis 1950, 519f.
[9] Whitrow 1961, 175.
[10] Part II in Gale (ed.) 1967.
[11] The distinction of uniform time flow as a law, and uniform motion as a subjective time relation should not be confused with Newton’s distinction of absolute and relative time (see Newton 1687, 6).
[12] Gale (ed.) 1967, 66.
[13] Analogously, one may add geometrical distances in the same way as numerical differences if the three points concerned are situated on or near a straight line.
[14] Lorentz 1895, 1-7.
[15] Einstein 1905a; 1921.
[16] Reichenbach 1927, 123-129; Grünbaum 1963, 342-268, 666-708; 1968, 295-336.
[17] Bunge 1967a, 187-188; Whiteman 1967, 36; Sklar 1974, 287-294.
[18] See Doran 1975; Goldberg 1970; Hesse 1961; Hirosige 1976; Schaffner 1972; Swenson 1972; Whittaker 1910, 1953.
[19] It seems that Einstein developed his special theory of relativity without having knowledge of the experiments of Michelson and Morley. See Shankland 1963, 1973; Bunge 1967a, 193; Holton 1973, 261-352. For an opposite view, see Grünbaum 1963, 377-386, 834-837. See also Gutting 1972; Swenson 1972; Hesse 1974, 246; Williamson 1977.
[20] Bunge 1959a, 65ff; 1967a, 206: ‘… the space of events, in which the future-directed [electromagnetic] signals exist, is not given for all eternity but is born together with happenings, and it has the arrow of time built into it.’ Sometimes two events with a space-like interval are called ‘topologically simultaneous’, where ‘simultaneity’ is the relation of not being connectable by a physical causal chain or signal. See Grünbaum 1960, 410ff; 1963, 28-32, 351; 1968, 22; Reichenbach 1927, 127, 145-147; 1956, 40-41. The relation ‘topological simultaneous with’ is not transitive.
[21] Reichenbach 1956, 42.
[22] Hence I reject the view, expressed by Minkowski in 1908: ‘Henceforth space by itself, and time by itself are doomed to fade away into mere shadows, and only a kind of union of the two will preserve an independent reality’, Minkowski 1908, 75; for a criticism of the Minkowski formalism, see O’Rahilly 1938, 404-419, 732-740.
[23] Feinberg 1967; Reichenbach 1927, 147.
[24] It may be called an ‘axiom of identity’ that an identifiable moving subject may be at the same place at different moments, but not at different places at the same moment, see p. 84, 85. Tachyons do not satisfy this axiom.
[25] A similar argument is given by Goldstein 1959, 200: ‘… the transformation properties must be the same for all forces no matter what their origin. The statement ‘a particle is in equilibrium under the influence of two forces’ must hold true in all Lorentz systems, which can only be the case if all forces transform in the same manner.’
[26] For a contrary view, see Bunge 1967a, 182ff, who, e.g., defines an inertial frame of reference as one in which Maxwell’s equations are satisfied. I think Bunge remains too close to the ‘… historical origins of the theory as far as its leading axioms are concerned’ (182), which, of course, are not denied in our treatment. Bunge explicitly states that there would be no basis for the special relativity theory without an electromagnetic field (205).
[27] Einstein 1905a.
[28] Holton 1973, 187; Poincaré 1905, 1906.
[29] Einstein 1905b.
[30] Newton 1687, 10, 11.
[31] Kuhn 1962, 72; Jammer 1954, 114ff; Reichenbach 1927, 213.
[32] Mach 1883, 279-286. A critique of Mach’s views does not need to agree with Russell 1927, 17, who says that ‘… the influence attributed to the fixed stars savours of astrology, and is scientifically incredible’.
[33] Eddington 1920, 157-165.
[34] Jammer 1954, 93ff.
[35] Jammer 1954, 114-124. Even Maxwell mistakenly stated: ‘Acceleration, like position and velocity, is a relative term, and cannot be interpreted absolutely.’ Cp. Maxwell 1877, 25, and Larmor’s footnotes on this page.
[36] Mach 1883, 286-290; Reichenbach 1927, 210-218; Graves 1971, 298-305; Jammer 1954, 139-141, 190-196; Grünbaum 1963, 418-424; Bunge 1967a, 134; Mittelstaedt 1963, 81ff; Sklar 1974, 157-234; Whittaker 1953, 168, 183; Nagel 1961, 203-214.
[37] This is somewhat loosely expressed. We shall not bother with the further distinction of covariant and contravariant magnitudes (cf. Landau and Lifshitz 1970, 26).
[38] For a contrary view, see Bunge 1967a, 183: ‘The principle of relativity is, in short, (a) a heuristic principle and (b) a metalaw statement – and a normative one not a declarative metanomological statement, for it does not say what is but what ought to be the case.’
[39] Houtappel et al. 1965, 596; Bunge 1967a, 86, 87. By the distinction of physical laws from the initial and boundary conditions we meet ‘Curie’s observation’ that if the world, in all its details, were invariant with respect to displacement there would be no way to distinguish between the two parts. See Houtappel et al. 1965, 596.
[40] On a sphere, being a two-dimensional non-Euclidean manifold, a great circle is a geodesic, which may be the shortest as well as the longest connection between two points.
[41] Mittelstaedt 1963, 78. Einstein once used the name ‘ether’ for this substratum. This is unfortunate, because it has nothing in common with the 19th-century ether.
[42] Cp. Whiteman 1967, 179.
[43] Neumann 1932, 205ff; Bunge 1967a, 210ff.
[44] Jammer 1954, 171ff; 1957, 257; 1961, 205; Akhieser, Berestetsky 1953, 372ff.
[45] Newton was the first to realize the universality of gravitation by discovering (a) that the force between the sun and the planets, the earth and the moon, and the force causing falling motion, are the same, and (b) that the gravitational force on a subject is proportional to its mass, regardless of its typical structure or composition. See Mach 1883, 229-234, 241.
[46] Bunge 1967a, 207ff; 1967b, I, 400.
Chapter 5
5.1. Isolated systems
5.2. Thermal physics
5.3. Conservation laws
5.4. Force
5.5. Fields
5.6. Current and entropy
5.1. Isolated systems
Chapters 2 and 4 discussed general subjective relations, qualified by the modal aspect in question: numerical difference, spatial relative position, kinematic relative motion. To abstract from the typical individuality of things, events, etc., in order to study their modal relations is more difficult in the kinetic and physical modal aspects than it is in the numerical and spatial relation frames.
It turns out that interaction is the general modal physical subject-subject relation. This means that the possibility of isolating systems is limited. In fact, no pair of physically qualified systems is completely isolated. Belonging to the creation implies interacting with every other created thing. Hence the introduction of isolated systems does not seem to be germane to the problem of time. By definition two isolated systems do not interact physically, and so do not maintain a physically qualified subject-subject relation, although they still have pre-physical relations: relative magnitudes, relative positions, and relative motion. On the other hand, if two systems interact it may be difficult to distinguish them from each other. The interaction between two systems may be so strong that they should be considered as one system. Depending on its context, one may speak of a modal physical subject if it can be isolated such that its external interactions are negligible.[1] This does not mean, however, that it loses its individuality as soon as it interacts with another subject, although this may happen. The strength of the interaction will determine whether two subjects can be distinguished from each other. In any case, it appears to be extremely fruitful to speak of the interaction of separate systems, especially if they are isolated except for this particular interaction. In fact, it appears to be a necessary methodological prerequisite for their analysis, both theoretical and experimental.[2]
The study of the abstract general characteristics of a physical subject requires leaving aside its typical structure. This means that the present chapter will not discuss the branches of physics investigating the typical structures of physical subjects: electromagnetism, nuclear and atomic physics, solid state physics, chemistry, etc., and also statistical physics, which studies the behaviour of a large number of interacting systems of a certain kind (presupposing their structural similarity). The present chapter is restricted to interactions in which either the internal state of the system (thermal physics) or the external states (mechanics) are involved in a purely modal way.[3]
The objectification of a physical subject invariably requires use of the concept of a state. In this concept the identity of a system is presupposed, otherwise it would be meaningless to say that a system can be in different states, or can change its state. Strictly speaking a state can only be ascribed to an isolated system, yet it is often possible to speak of the state of a composite system.
In the concept of a state three aspects can be distinguished. First, the state has a specific numerical value for a certain number of physical variables. It is said to be completely determined by a number of variables if all other physical properties of the system can be derived from them. These variables simultaneously determine the state. The number of independent variables necessary to determine the state is the latter’s dimension. Secondly, the state of a system in its spatial relation to other systems can be considered, if they interact statically, i.e., via a field or a force. Finally, the state of motion with respect to some other system may be relevant. In each case the state is changeable.
Among the modal characteristics of physical subjects the concepts of energy, force, and current are the most important. It will be demonstrated that these always refer back to the numerical, spatial, and kinetic relations, respectively. Therefore, their numerical values serve as mathematical objectifications of physical relations. Except for very artificial constructions, physics cannot do without these or equivalent concepts, because energy, force, and current refer back to mutually irreducible modal aspects. On the other hand, these are strongly related because each is a projection of the same physical aspect. In monistic philosophies, such as were popular in the 19th century, this view is unacceptable. However, various attempts to reduce one to the other have always been in vain. In the philosophy of the cosmonomic idea it becomes clear why this is impossible.
5.2. Thermal physics
An isolated system is an abstract concept because no concrete physically qualified subject can be completely isolated from other subjects, and because it does not take into account the individual character inherent in any concrete physical system. Nevertheless it is a meaningful concept, since even in experimental physics walls can be devised which are nearly impermeable to energy and matter transport. However, this concept is especially meaningful as a theoretical concept because it allows one to study modal physical laws.
Thermodynamics deals with the modal physical properties of macroscopic bodies. It was developed in the first half of the 19th century by Sadi Carnot, Julius Mayer, James Joule, Hermann Helmholtz, Rudolf Clausius, William Kelvin, and others. In the beginning of the 20th century Constantin Carathéodory investigated its foundations. However, the axiomatic representation of this branch of physics is still a matter of dispute,[4] and I shall discuss its hypotheses without pretending rigour or completeness.
Whereas in mechanics a physical subject can often be objectified by a spatial point (the centre of mass), in thermal physics (which includes statistical physics and thermodynamics) a physical subject has connected and interacting parts. As a first hypothesis it is stated that any isolated system has a macroscopically unique equilibrium state, designated by a limited set of extensive parameters (3.8), such as its volume (V) and its internal energy (U). Some of these parameters may be determined by the boundary conditions (the volume, or a static field), while others are determined by the internal structure of the system.
To understand the meaning of the extensive parameters, one has to consider some possible interaction, because the state of an isolated system can only have meaning while anticipating some interaction. Suppose two systems A and B are completely isolated. The state of the system AoB consisting of the physical sum of A and B is designated by extensive parameters like volume V and energy U, whose values are the numerical sum of the values for the separate systems:
V(AoB)=V(A)+V(B), U(AoB)=U(A)+U(B)
Provided no chemical reaction takes place, the number of moles (Ni) of each chemical species is also an extensive quantity. If a chemical reaction takes place, the number of moles of each atomic component must be incorporated as well.
Now let the two systems interact with each other. According to a second hypothesis, the decrease of any extensive parameter for A equals the increase of the corresponding parameter for system B. During the interaction, the total volume, energy, and number of moles (or atoms) of any kind, are unchanged. It seems obvious that if the volume of a system increases, the volume of its surroundings must decrease by the same amount. It is not trivial that this also applies to energy. The conservation of energy is a quantitative expression of the physical subject-subject relation.
The first two laws of thermodynamics
The first and second hypotheses imply that the extensive parameters also serve to describe the system when it is not in equilibrium. In this case the description is not unique. Different non-equilibrium states correspond to the set of extensive parameters. A non-equilibrium state of an isolated system can only be described by the way it was prepared (which may include ‘waiting a little’[5]). However, according to a third hypothesis, there exists a mathematical function of the extensive parameters, called the entropy S, which has a definite value for the equilibrium state of the system, and which can be used to describe the development of a system which is not in equilibrium. For the physical sum AoB of two mutually isolated systems A and B, the entropy is the numerical sum of the entropies of each system:

S(AoB)=S(A)+S(B)
If the two systems interact, the total entropy will stay constant or increase. For two parts of a system, which as a whole is in internal equilibrium, S is an extensive parameter. But with respect to a system which is not in equilibrium, the increase of S is related to the current between parts of the system, and with respect to such parts (or to different non-interacting systems), S determines the ‘generalized force’ or ‘potential difference’ between them. This hypothesis is called the Second Law of thermodynamics.
Energy is always involved in any possible interaction. This means that energy is a relevant state parameter for any thermodynamic system. An interaction in which energy is not involved, would not lead to equilibrium.[6] This fourth hypothesis is usually formulated in the so-called First Law. It states that the energy increase of any thermodynamic system equals the sum of the work performed on the system, and the heat transferred to it. Heat is the product of the temperature of the body and its entropy increase during the heat transfer, and work is related to a change in any extensive state parameter except energy. Work is invariably determined by a change in the boundary conditions. Whether a certain extensive magnitude is relevant to a certain system depends on whether it is possible to perform work on that system by changing that parameter. This means, for example, that the magnetization of a non-magnetic gas is not a relevant state parameter. Thus heat and work are not forms of energy, as is often inaccurately stated, but forms of energy transfer. They are related to currents, and cannot serve as state parameters.
Thermodynamic potentials
The so-called intensive parameters or potentials (3.9), like temperature and pressure, can now be introduced as partial derivatives of either energy or entropy. In both cases the derivatives are taken with respect to the extensive parameters. In these definitions, the temperature T has an exceptional role, because it is defined in two equivalent ways:[7]
either T=∂U/∂S or 1/T=∂S/∂U
Other potentials are defined according to the following alternatives. Let a potential Y or F correspond to the extensive parameter X (X is neither energy nor entropy), then:
either Y=∂U/∂X or F=–Y/T=∂S/∂X
The first alternative is called the energy representation, and is older than the second alternative, the entropy representation, which has advantages for the study of currents (5.6). With this definition of the intensive parameters, the First Law of thermodynamics reads:
either dU=TdS+ΣYdX or dS=(1/T)dU+ΣFdX
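As a numerical check of these definitions, consider the following minimal sketch (Python; the entropy function of a monatomic ideal gas, with all constants set to 1, is an assumed illustration and not part of the text):

import math

N, k = 1.0, 1.0                      # illustrative units

def S(U, V):
    # entropy of a monatomic ideal gas, up to an additive constant
    return N * k * (math.log(V) + 1.5 * math.log(U))

U, V, h = 1.5, 1.0, 1e-6
inv_T = (S(U + h, V) - S(U - h, V)) / (2 * h)   # 1/T = dS/dU (entropy representation)
F_vol = (S(U, V + h) - S(U, V - h)) / (2 * h)   # F = dS/dV, equal to p/T for the volume

T = 1 / inv_T
p = T * F_vol
print(T, p)    # T = 1.0 and p = N*k*T/V = 1.0, reproducing the ideal gas law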
The values of the intensive parameters determine the direction of the interaction. For example, if the temperature of a body A is higher than that of a body B, heat will flow from A to B. If all corresponding intensive parameters are equal for the two interacting systems, A and B are in thermodynamic equilibrium with each other.
In thermostatics (a branch of thermodynamics) the intensive parameters are defined with respect to equilibrium states. In this context it makes no sense to attribute a temperature value to a body which is not in thermal equilibrium. However, if we consider this system as having parts – i.e., as consisting of a large number of small interacting sub-systems, each being near an equilibrium state – the temperature can be said to have different values at different positions within the body. In this case one speaks of a temperature field. This is why the intensive parameters are also called potentials. The spatial gradient of a potential has the character of a force, driving a current. Thus, every extensive parameter determining the state of a system is related to a generalized force and a generalized current.[8] But this is only the case as far as the extensive parameters are related to energy.
State space
If the state of a system is determined by n independent extensive parameters, it can be represented by a point in an n-dimensional state-space, in which the extensive parameters form the coordinates. The coordinate system in this space is not unique. One or more extensive parameters can be replaced by intensive parameters (which is useful because intensive parameters are often easier to measure), or by other extensive parameters such as the free energy or the (free) enthalpy, if the so-called Legendre transformations are employed. The latter parameters are often useful for the discussion of situations with specific boundary conditions, such as the equilibrium state of a system that is not isolated but kept at constant temperature or pressure.[9] The number of dimensions of state-space remains the same in these transformations.
By introducing an additional typical law this number can be reduced. One parameter may be eliminated by introducing an (n–1)-dimensional manifold in state space with the help of a so-called equation of state which relates an extensive parameter to its corresponding intensive parameter. Such an equation depends on the temperature and the typical interaction of the molecules composing the system under study. Thus the ideal gas law relates pressure and volume assuming that the molecules in the gas have no extension and do not interact with each other. The derivation of the specific heat of the gas depends on whether the molecules consist of one, two, or more atoms. Curie’s law for a magnetic gas (relating the magnetization and the magnetic field strength) is derived assuming that its molecules have non-interacting magnetic moments.
These are clearly limiting (‘ideal’) cases, but they are still dependent on assumptions concerning the typical structures and (lack of) typical interactions. Accounting for the extension and the interaction of the molecules in a simplified way, one arrives at the Van der Waals equation of state, which even accounts qualitatively for the condensation of a gas to a fluid. Yet the development of the equation of state is not a purely modal matter, in contrast to the framework of thermodynamics as outlined above.
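By way of illustration, a small sketch of how such an equation of state is evaluated (Python; the quoted Van der Waals constants are approximate literature values for CO2 and serve only as an example):

R = 8.314               # J/(mol K)
a, b = 0.364, 4.27e-5   # Pa m^6/mol^2 and m^3/mol, approximate values for CO2
n, T, V = 1.0, 300.0, 1e-3   # mol, K, m^3

p_ideal = n * R * T / V
p_vdw = n * R * T / (V - n * b) - a * n**2 / V**2   # Van der Waals equation of state

print(p_ideal, p_vdw)   # the interaction terms change the pressure noticeably (here lowering it)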
5.3. Conservation laws
The extensive parameters as applied in thermal physics mainly describe the internal state of a system. This section will discuss the relevance of energy for the external state concerning the system’s spatial position and motion. Originally, energy as kinetic and potential energy was only recognized with respect to this external state. The internal state was described by a single variable (mass) for a subject whose extension could be neglected, or by a tensor (moment of inertia) for extended, rigid bodies. The relation between mass and energy was not recognized until 1905.
Mechanics is mainly concerned with the relative motion of material subjects, and the simplest interaction, therefore, is a collision. Mechanists like René Descartes even believed that collisions were the only admissible kind of interaction. One speaks of a collision between two subjects if the interaction can be assumed to be of short duration, and if one’s attention is directed to the consequences of the interaction for the relative motion of the system. Except for this interaction the two colliding systems may be considered to be isolated; therefore their overall motion, as objectified by the motion of their centre of mass, is uniform before and after the collision, and is not influenced by the interaction.
A collision is called elastic if the internal state of both systems is not essentially changed by the interaction. It is called inelastic if the state of motion as well as the internal state of at least one system is changed in a physical sense. This internal change itself cannot be described by mechanics, unless it is assumed (as was done in classical physics) that it can always be explained by collisions between particles composing the system. Macroscopically, the concept of an elastic collision is an abstraction. Even the collision of two billiard balls is partly inelastic. A collision between two molecules is usually called elastic if the collision energy is less than the energy of the lowest excited states of the molecules, but even then the wave packets of the two molecules are reduced (6.3) such that their internal state changes. Nevertheless, the concept of an elastic collision is very useful in studying the changing kinetic state of interacting systems.
The external motion of the two interacting systems, considered as a whole and objectified by the motion of the centre of mass, can be described entirely in kinetic terms. This is impossible with respect to the relative motion of the two colliding systems. Their motion must be described in terms of kinetic energy and linear momentum.[10] In the 17th and 18th centuries people quarrelled about the priority of one over the other.[11] In elastic collisions, neither the total kinetic energy nor the total momentum of the two colliding systems are influenced by the interaction. The gain in energy of one system equals the loss of energy of the other, and the same applies to the momentum. This is no longer the case in inelastic collisions. Whereas the motion of the centre of mass and the total momentum are still uninfluenced, the interaction changes the total kinetic energy.
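The one-dimensional elastic collision provides the simplest worked example: the final velocities follow from the conservation of linear momentum and kinetic energy alone. A minimal sketch (Python, with illustrative masses and velocities):

def elastic_collision_1d(m1, v1, m2, v2):
    """Final velocities after a 1D elastic collision, derived from
    conservation of linear momentum and kinetic energy."""
    u1 = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    u2 = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return u1, u2

u1, u2 = elastic_collision_1d(1.0, 2.0, 1.0, 0.0)
print(u1, u2)                        # 0.0, 2.0: equal masses exchange velocities
print(1.0 * u1 + 1.0 * u2)           # total momentum, unchanged: 2.0
print(0.5 * u1**2 + 0.5 * u2**2)     # total kinetic energy, unchanged: 2.0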
In the 19th century it was discovered that, in this case, kinetic energy is transferred into another form of energy (e.g., by heating of the colliding systems). It became clear that the total energy (including the internal energy) rather than the external (mechanical) energy must be a constant for a system as a whole.[12] Essentially, this is the content of the First Law of thermal physics. It states that the internal energy can only be changed by an external supply of energy – heat or work. Hence, this law does not say anything about the total energy, but concerns itself with energy differences. It says something about the possible increase or decrease of energy and not about its total value. In the special theory of relativity Einstein showed that the mass of a body is proportional to its internal energy (the famous, but often misunderstood relation E = mc²)[13] if both are determined with respect to a reference system in which the body rests.
Constants of the motion
After the unification of these three concepts – mass, internal (thermal) energy, and external (mechanical) energy – it became possible to achieve a clear understanding of the meaning of the conservation laws. The constancy of energy and linear and angular momentum of an isolated physical system depends on the isotropy and homogeneity of numerical time and space, which are tacitly assumed. By isotropy is meant that there is no preferred direction in space, and by homogeneity is meant that there are no preferred instants of time or spatial positions. Only time and spatial differences count – not some absolute time or position parameter.
It can be shown that the symmetry properties of Euclidean space allow ten constants of the motion: energy; three components of linear momentum; three components of angular momentum; and the position of the centre of mass. Each is related to some group of transformations under which Euclidean space is invariant. In classical physics these are subgroups of the Galilean group, while in special relativity they are subgroups of the Lorentz group. Energy is related to the homogeneity of numerical time; linear momentum is related to the homogeneity of space; angular momentum is related to the isotropy of space[14]; and the centre of mass is related to uniform motion.
This implies that if we find other variables which are constants of the motion, they must be derivable from, identical with, or proportional to the ten constants of the motion (Emmy Noether’s theorem, 1918).[15] For example, a wave packet’s frequency and wave vector are proportional to the energy and momentum, respectively. Planck’s constant functions as the universal proportionality constant and therefore has a purely modal character (chapter 7). In fact, the dependence of the constancy of energy and momentum on the homogeneity of time and space is easier to prove in quantum physics than in classical physics (9.5). The relation of the conservation law of energy with the homogeneity of time also implies that this law is restricted if the isolation of the system has a limited duration. This restriction is expressed in Heisenberg’s relation, ΔEΔt>h (7.6).
In general, the ten mentioned conservation laws are mutually independent. It is only possible to relate them in special cases. For example, a particle with mass m has the energy E and momentum p related either by E=p²/2m or by E=(m²c⁴+p²c²)^½, in classical physics or relativity theory, respectively. For an extended system consisting of point masses between which only central forces are acting, the conservation laws for angular momentum and for the motion of the centre of mass can be derived from the conservation law of linear momentum.
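How the relativistic relation approximates the classical one at low momentum is easily displayed (Python, with m and c set to 1; the rest energy mc² is subtracted before the comparison):

import math

m, c = 1.0, 1.0

def E_classical(p):
    return p**2 / (2 * m)

def E_relativistic(p):
    return math.sqrt(m**2 * c**4 + p**2 * c**2)

for p in (0.01, 0.1, 1.0):
    print(p, E_classical(p), E_relativistic(p) - m * c**2)
# at p = 0.01 the two values agree closely; at p = 1 they differ markedly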
The conservation law of energy has three aspects: conservation of a numerical amount of interaction; the possibility of transfer of energy from one system to another; and the conversion of one kind of energy into another one. There are many different kinds of energy, modal (internal, gravitational, kinetic), and typical (electric, magnetic, nuclear). However, they do not stand in isolation. They can be transformed into each other if the two subjects interact such that heat or work is exchanged, or potential energy is transformed into kinetic energy. For all these different interactions, the universal modal concept of energy allows of comparing the different kinds of energy with each other, and therefore gives a general objective description of them. This gives energy, as the fundamental numerical projection of physical interaction, its status of key-concept in physics.
5.4. Force
Force turns out to be a physical concept referring back to the spatial relation frame and being entirely of a modal character. It will hardly be necessary to show that force is an expression of a physical subject-subject relation. Newton’s third law of action and reaction is usually understood in this sense: the force exerted by a physical subject A on a physical subject B equals the force exerted by B on A, but the two forces act in opposite directions. This also applies to thermodynamic generalized forces. Besides forces between spatially remote bodies, we also find forces between the parts of extended bodies (e.g., elasticity).
The static character of force implies that different forces, applied to the same physical subject, can balance each other. This is also possible if the mutually compensating forces are of a different typical nature. For example, an electric force exerted on a charged body can be balanced by the latter’s weight. An electric voltage across a metallic wire can be compensated by a temperature difference, preventing an electric current from flowing, which would otherwise be caused by the potential difference (this is the so-called thermo-electric effect). This property of balancing forces of different typical character allows of measuring them. At the same time it demonstrates the modal, general character of force.[16] Forces must be added in the same way as spatial vectors. This property depends on the independence of forces acting simultaneously on the same subject and is related to the independence of the spatial dimensions.[17]
The three projections of physical interaction, energy, force, and current, are related to each other. The relation of force and energy can be seen in two ways, namely, via the concepts of work (this section) and of potential energy (5.5). If a system on which a force is exerted is displaced in the direction of the force, the latter is said to perform work on that system, which therefore gains energy – e.g., the velocity of the system may increase, because its kinetic energy increases. In mechanics this is expressed in Newton’s second law of motion. This is only a particular example of the relation between force and energy. It has no application in thermal physics. Therefore it is unwarranted to define forces with the help of this law[18] (this use explains why one speaks of ‘generalized’ forces in thermal physics), although as an operational definition it helps to define the metric and the unit for force.
The concept of force as related to accelerated motion became the cornerstone of Isaac Newton’s dynamic theory, and ‘… rose almost to the status of an almighty potentate of totalitarian rule over the phenomena …’[19] in its interpretation along the lines of Roger Boscovich[20] and Immanuel Kant. In the 19th century people like Ernst Mach[21], Gustav Kirchhoff and Heinrich Hertz[22], who realized the relational character of force, tried to reduce this concept to accelerated motion, with or without mass as a primitive irreducible concept. They rightly reacted against the many attempts to explain the concept of force (especially gravitational and electromagnetic forces) by appealing to some concealed mechanical action of the ether on moving bodies. In these efforts forces were often treated as substances.[23] These positivist authors may be granted that force is not an irreducible relation, such as space, motion, or interaction. But this spatial projection of interaction cannot be reduced to a kinematic relation.
The identification of force with the product of mass and acceleration is objectionable not only because of static effects, but also because force can be specified (electric force, magnetic force, etc.). Neither acceleration nor mass can be specified in this way. It would also be quite meaningless to introduce the concept of fictitious forces in accelerated frames of reference (4.8), if it were not possible to identify real forces as physical subject-subject relations. Therefore, Newton’s second law is an equation and not an identification. It cannot serve as a definition of mass or as a definition of force. In the heyday of classical mechanics, when only the functional, modal character of force was considered, this could be overlooked. But nowadays we are more aware of the mutual irreducibility of special forces and therefore of the asymmetry in the equation F=ma.[24]
5.5. Fields
Another way of relating force and energy conceives of a force as the spatial gradient of potential energy. A force describes the static interaction between two or more spatially remote physical subjects, or within a spatially extended subject. Therefore in many (but not all) cases the concept of a force can be substituted by that of a field. Instead of the force exerted by a subject A on another subject B, we may consider A as the source of a field in which B is situated (and conversely). The concept of a field was introduced by Michael Faraday, William Thomson, and James Clerk Maxwell for electric and magnetic interactions, in order to replace action at a distance by contiguous interaction.[25] A static field enables us to determine the force that the source of the field would exert on another body (a ‘test body’, small enough not to change the field) if it were present at some spatial position relative to the source.
A test body has a potential energy in the field. It feels a force equal to the spatial gradient of the potential energy. If the test body moves from one spatial position to another, its gain in potential energy is equal to its loss in kinetic energy (at least in the simple case of a conservative field in which there is no irreversible energy dissipation). Hence for static situations, a field describes a possible (potential) interaction, and a force describes an actual one. A field is a spatial concept, anticipating the kinetic and physical modal aspects, whereas force is a physical concept, referring back to the spatial aspect.
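The relation between field, potential energy, and force stated here can be illustrated numerically. The following Python sketch uses a hypothetical one-dimensional spring potential and recovers the force as the negative spatial gradient by a finite difference:

    # Force as the negative spatial gradient of potential energy, F = -dU/dx.
    # The quadratic potential below is a hypothetical example (a spring).
    def U(x):
        k = 4.0            # spring constant (N/m), illustrative
        return 0.5 * k * x**2

    def force(x, h=1e-6):
        # central finite difference for -dU/dx
        return -(U(x + h) - U(x - h)) / (2 * h)

    x = 0.3
    print(force(x))        # approx -1.2 N, matching the analytic result F = -k*x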
A field becomes actual only when related to currents, e.g., electromagnetic waves. The description of the electromagnetic field by Maxwell’s equations allowed him to give an electromagnetic interpretation of light waves. This showed the possibility of interaction (the exchange of electromagnetic energy) which could not be described in terms of Newton’s third law. The meaning of this law is also restricted in the theory of relativity, which is concerned with the relative motion of subjects, whereas forces denote static interaction. Therefore, the concept of force will certainly be relativized when the relative motion of interacting subjects is taken into account.[26]
However, this does not imply a loss of meaning, but rather indicates a deepening of meaning. For example, the electric interaction, which is expressed by Coulomb’s law in static cases and involves only static electric forces, becomes electromagnetic interaction as described in Maxwell’s equations, which include magnetic fields. This is the first example of an opened up force. Although magnetic forces may have many characteristics of forces (e.g., they can be balanced by other forces), they lack others (they cannot be considered unequivocally as the spatial gradient of a potential energy). In relativity theory the concepts of electric and magnetic force are united into the concept of an electromagnetic tensor. By a change of kinematic reference frame (a Lorentz transformation) the electric field is transformed into a magnetic one, and vice versa.
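For a boost along the x-axis, the standard transformation rules make this mixing explicit: the field components along the boost are unchanged, while the transverse components of E and B mix. A Python sketch, with made-up field values:

    import math

    c = 299792458.0                      # speed of light (m/s)

    def boost_fields(E, B, v):
        """Transform E and B (3-vectors) to a frame moving with speed v along x."""
        g = 1.0 / math.sqrt(1.0 - (v / c) ** 2)      # Lorentz factor
        Ex, Ey, Ez = E
        Bx, By, Bz = B
        # Components along the boost are unchanged; transverse ones mix.
        E2 = (Ex, g * (Ey - v * Bz), g * (Ez + v * By))
        B2 = (Bx, g * (By + v * Ez / c**2), g * (Bz - v * Ey / c**2))
        return E2, B2

    # A purely electric field in one frame acquires a magnetic part in another:
    E, B = boost_fields((0.0, 1.0, 0.0), (0.0, 0.0, 0.0), 0.5 * c)
    print(E, B)   # Ey is enhanced by gamma; a Bz component of order v*Ey/c^2 appears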
Friction is a velocity-dependent force which cannot be considered as the spatial gradient of a potential energy, and cannot be reduced to a field. Friction arises when two systems in contact move with respect to each other, or would do so in the absence of friction. It is subjected to the physical order of irreversibility, because it invariably leads to a loss of kinetic energy, which is transformed into thermal energy. But friction has the character of a force, in so far as it can be balanced by other forces. It must be taken into account in the application of Newton’s second law of motion. Friction allows interacting systems to move uniformly in situations where they would accelerate in its absence. Mechanical equilibrium, too, is often possible only because of friction. This applies not only to the motion of falling bodies in the earth’s atmosphere, but also to all moderate currents in thermal physics. In fact, uniform motion influenced by friction is far more common than uniform motion in the absence of any interaction. The latter is an abstraction showing the mutual irreducibility of the kinetic and physical modal aspects, as was first realized by Galileo (5.1). His contemporary opponents, who defended the Aristotelian view that every motion needs a cause, could certainly point to a firm empirical basis. Galileo arrived at his view, implicitly recognizing friction as a force, because he wanted a consistent description of other phenomena.
However fruitful the concept of a field is, it has its modal limitations, and should not be absolutized. Consider two electrically charged subjects. In classical physics each is placed in the field of the other – i.e., each particle experiences the centrally symmetric field of the other. For a third (test) body, however, the field is that of a dipole which is entirely different from a centrally symmetric one. This problem can be evaded by stipulating that no particle feels its own field, but this can only be maintained for a static field. As soon as one allows the particles to move, one is confronted with the unsolvable problem (a problem both in classical and modern physics) that the particle will feel its own field because the velocity of electromagnetic interaction is not infinite. In quantum field theory the attempt to deduce the structure of the electron from the properties of the electromagnetic field leads to an infinite self-energy for the particle, which can only be eliminated by an unsatisfying trick.[27]
5.6. Current and entropy
The concept of current or flow is a modal physical concept which refers back to the kinetic relation frame. It is not just a thermodynamic concept because it also occurs in electromagnetic theory, in high energy physics, and in continuum mechanics. Generally speaking, current is a transfer of energy, caused by a generalized force. A heat current is caused by a temperature difference, an electric current by an electric potential difference, a molecular current by a gradient in the chemical potential, and a water current in a river by a gradient in the gravitational potential. Very often, the current has a uniform speed, which means that the driving force is balanced by some kind of friction or resistance, whose strength depends on the velocity of the current.
A current is not merely a displacement of energy. In that case one should also speak of a current if a free subject has a uniform motion. However, the latter needs no cause since there is no force or interaction involved. In a current the retrocipatory kinetic projection of interaction is involved. Work is also included in this general concept of current. Just as with energy and force, currents may be purely modal (work, heat flow, and currents caused by gravitation) or typical (some typical currents are mentioned above). The common, and therefore general, feature of currents is the reference from the modal physical aspect to the kinematic one. Current must be distinguished from accelerated motion, which anticipates the physical modal aspect.
In classical mechanics currents other than work are found only in continuum mechanics. The concept of current depends on the disclosed concept of force, i.e., on a field. The basic equation for the motion of a fluid rests on the assumed conservation of matter. This consideration gives the so-called equation of continuity, which is the starting point for all investigations in this theory. This law is also used in thermodynamics with respect to extensive parameters, and is a direct consequence of the property which gives them the name ‘extensive’.
However, the equation of continuity is not applicable to entropy. As soon as there is some kind of friction or resistance, a current is accompanied by a creation of entropy, whereas entropy will not increase if there is no current. In the limiting case of the performance of pure mechanical work the increase of entropy is zero, but this is only realized if friction and resistance can be neglected.
One of the reasons why energy and force should be considered the most general retrocipations of physical interaction to the numerical and the spatial modal aspects, respectively, is that different forms of energy can be transformed into each other, and that different forces can balance each other. With currents this is more complicated. In the thermoelectric effect a heat current leads to an electric potential difference, and in the Peltier effect a temperature difference is caused by an electric current. Hence a certain current Ji can not only be caused by the corresponding generalized potential difference dFi, but also by other potential differences, dFj. If these gradients are not too large, the current is proportional to them. Calling the proportionality constant Lij, one finds that

Ji = Σj Lij dFj

and similar expressions for other currents, Jj. Expressing the potentials in the so-called entropy representation (5.2) yields Onsager’s relation:

Lij = Lji

for any pair of currents accompanying each other. Although the thermoelectric effect and similar effects were known for a long time, this general relationship was not discovered until shortly before the Second World War, probably because physicists were used to working with the energy representation, in which this relation does not show itself in a simple way.
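As a minimal illustration of such coupled currents, here is a Python sketch; the coefficients are invented, and the symmetry of the matrix expresses Onsager’s relation.

    # Coupled currents J_i = sum_j L_ij * dF_j, with Onsager symmetry L_ij = L_ji.
    # The coefficients below are invented for illustration (entropy representation).
    L = [[2.0, 0.3],
         [0.3, 1.5]]          # L[0][1] == L[1][0]: Onsager's reciprocal relation

    dF = [0.1, -0.05]         # generalized potential gradients (thermal and electric, say)

    J = [sum(L[i][j] * dF[j] for j in range(2)) for i in range(2)]
    print(J)  # each current is driven by *both* gradients (e.g., thermoelectric effects)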
Currents also play an important role in equilibrium situations – which means that a current is not always caused by a force. For instance, in a container with a liquid and its vapour in equilibrium, the currents of vaporization and condensation are not zero, but equal each other, such that their total effect is zero. Because vaporization is determined mainly by the temperature in the container, and condensation is determined by the number of molecules per unit volume in the vapour, there is a strong relation between the temperature and the vapour pressure.
This dynamic description of equilibrium is also extremely fruitful in other parts of physics. For example, application of equilibrium considerations of this kind to electromagnetic radiation eventually allowed Planck to formulate the first quantum hypothesis.[28]
The creation of entropy is invariably connected with currents. This relation, expressed in the Second Law of thermodynamics, has a purely modal character. But the concept of entropy cannot be grasped completely in a purely modal way. It has a strong relation to the concept of probability, and to the idea of a ground state and excited states of a physical system. Both have a physical character.
The mass of a system can be considered (since the acceptance of special relativity theory) as the modal expression of its internal energy. This internal energy is determined by the typical structure of the system, and, as such, by its internal interactions. The ground state of a system which is its lowest possible internal equilibrium state must be distinguished from its excited states, which have higher energy values, and therefore have higher masses. This was not appreciated before the 20th century because the energy differences in chemical excitations are relatively small and do not give rise to a measurable increase in the mass of chemical substances. Only in nuclear and subnuclear reactions does the mass of interacting systems change appreciably. This accounts for the approximate validity of the law of conservation of mass in most chemical reactions.
The First and Second Laws of thermodynamics deal only with energy and entropy differences, not with total energy or entropy. There is also a Third Law, first formulated by Nernst in 1906, which states that at the absolute zero of temperature any conceivable process would leave the entropy constant (at T=0, ∂S/∂X=0 for any extensive or intensive parameter X except energy and temperature). This means, e.g., that at T=0 the heat capacity of any system is zero, which is borne out by low-temperature experiments.
This can be interpreted in the following way. Given an isolated system at rest, the equilibrium state at T=0 is the ground state of the system, for which the entropy is arbitrarily set at zero. At higher temperatures the state of a system is an excited state, and corresponds to a positive entropy, which will increase with increasing temperature. Thus entropy is an extensive, and temperature an intensive, measure of the amount of excitation of a system. Two systems in thermal contact exchange energy until their rates of excitation, as expressed by their temperatures, are equal. In this way, the concepts of entropy and temperature also have significance for microsystems like atoms and molecules.
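The Third Law can be given a concrete numerical form: the entropy follows from the heat capacity as the integral of C(T')/T' from 0 to T, and this integral converges only if C vanishes at T=0. A Python sketch with a Debye-like low-temperature heat capacity proportional to T cubed (the prefactor is arbitrary):

    # Entropy from the heat capacity: S(T) = integral from 0 to T of C(T')/T' dT'.
    # For the integral to converge, C must vanish at T = 0 (Third Law).
    def C(T):
        return 1e-4 * T**3          # Debye-like low-temperature form, arbitrary prefactor

    def entropy(T, steps=10000):
        dT = T / steps
        # midpoint rule; the integrand C(T')/T' ~ T'^2 is finite at T' = 0
        return sum(C((i + 0.5) * dT) / ((i + 0.5) * dT) * dT for i in range(steps))

    print(entropy(10.0))            # finite, and entropy(T) -> 0 as T -> 0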
At first sight, there is no relation between the concept of a current and the idea of a ground state and excited states. But the latter idea has only meaning if applied to a large number of interacting systems (like the molecules in a gas), such that there is a free exchange of energy between the systems concerned. The distribution of energy over the various excited states comes about in a dynamic state of equilibrium, quite similar to that between a liquid and its vapour.
[1] Redlich 1968 defines an ‘object’ (i.e., a physical subject) as anything that can be isolated, and an ‘isolated object’ as one whose properties remain unchanged whatever changes happen in its environment.
[2] Bunge 1959a, 125-134.
[3] This distinction of external and internal differs from another one common in mechanics. The forces between parts of a system are considered ‘internal’, whereas forces from outside the system are called ‘external’. In this case the reaction of the system on its environment is not taken into consideration. Cf. Suppes 1957, 294-298, Maxwell 1877, 2.
[4] Redlich 1968; Bunge 1967c; Noll 1974.
[5] Giles 1964, 17.
[6] Callen 1960, 44.
[7] The partial derivative ∂U/∂S means that other variables are kept constant.
[8] These are called ‘generalized’ because the concepts of force and current are originally defined in mechanics.
[9] Morse 1964, 96; Goldstein 1959, 215-216.
[10] Only in special cases energy and linear momentum are sufficient. In general, angular momentum, an independent conserved property, should also be taken into account, as was first proved by Euler; see Truesdell 1968, 239-243, 260.
[11] Jammer 1957, 165; Mach 1883, 310-314, 360-365; Scott 1970. Cartesians considered momentum or quantity of motion as most important, both in elastic and non-elastic collisions. Leibniz and his followers assumed that atoms were elastic, and hence that vis viva (mv2) was conserved in atomic collisions. Newtonians assumed that atoms were perfectly hard, such that in atomic collisions vis viva was lost. They derived the conservation laws from Newton’s third law.
[12] Helmholtz 1847; Elkana 1970, 1974.
[14] The fact that the conservation law of angular momentum refers back to the spatial isotropy implies that it is not necessarily related to rotational motion, as was assumed in classical mechanics. The latter view created difficulties in understanding the spin of electrons and other elementary particles (9.6).
[15] Jammer 1954, 198; Bunge 1967a, 49. In classical physics mass (the eleventh variable) is also a conserved quantity.
[16] The necessity of introducing a modal concept of force was better understood by Mayer and Helmholtz than by Hertz and Mach, see Whiteman 1967, 398.
[17] Mach 1883, 44ff, 242-243.
[18] Nagel 1961, 185ff; Poincaré 1906, Chapter 6.
[19] Jammer 1957, 241.
[20] Boscovich was the first to realize that the spatial extension of a physical subject is determined by repelling forces; cf. Jammer 1957, 171ff; Agassi 1971, 80ff; Berkson 1974, 25-28; Hesse 1961, 163-166.
[21] Mach 1883, 302-304.
[22] Hertz 1894.
[23] Jammer 1957, 224; Suppes 1957, 172, 297, 298.
[24] In Newton’s formulation of his second law, force is related to a change in linear momentum. One may also relate torque to a change in angular momentum, but this requires an independent law: angular momentum refers to the isotropy of space while linear momentum refers to the homogeneity of space. Torque, as well as force, is therefore a fundamental spatial retrocipation in the physical modal aspect.
[25] Agassi 1971; Berkson 1974. In fluid mechanics, d’Alembert introduced the concept of a velocity field, see Truesdell 1968, 122. Fields were also used to express the forces between the parts of a continuous extended body, e.g., elasticity. The concept of action at a distance, reluctantly introduced by Newton, and criticized by Huygens and Leibniz, is in fact alien to the driving motive of Cartesian physics, which only allowed contiguous interaction between unchanging material particles. Kelvin, Maxwell, and many of their contemporaries tried to save this idea with the help of a mechanical ether. In Sec. 4.4 we saw that the idea of the ether is now abandoned. A substantial substratum for fields is no longer considered necessary.
[26] Jammer 1957, 254ff.
[27] Weisskopf 1972, 96-128.
[28] Jammer 1966, chapter 1.
Chapter 6
6.1. The direction of time
6.2. The asymmetry of physical time cannot be reduced to probability
6.3. Irreversibility also applies to microprocesses
6.4. Time asymmetry concerns subjects, but is a law
6.5. Interactions
6.6. Initial and boundary conditions
6.7. Causality
6.1. The direction of time
In every post-physical modal aspect relations of cause and effect are found which always refer back to the physical modal aspect. The physical cause-effect relation will be discussed in section 6.7. For the present it is sufficient to observe that this relation is subjected to the law that no effect can precede its cause. This universal law is the physical time order of irreversibility. I shall argue that (a) irreversibility is irreducible to the already discussed temporal orders of before and after, simultaneity, and kinetic flow, (b) irreversibility is a universal, modal law, not reducible to laws concerning typical interactions, such as probability laws, and (c) as a law irreversibility is correlated to the physical subject-subject relation of interaction.
The asymmetry of time does not occur in the first three modal aspects, as long as they are not disclosed by the physical modal aspect. The numerical order of before and after, the spatial order of simultaneity, and the kinetic flow of time are symmetrical. For instance, a purely kinetic movement is reversible in time. Reversal of the sign of the time parameter in the mathematical description of the state of motion of a subject yields again a possible motion subjected to the same law. But if a concrete moving subject is considered, and the physical aspect is not neglected, friction must be taken into account. Friction is always present and a motion with friction is not reversible. Due to friction, every changing system will eventually reach a state of relative equilibrium.
I shall distinguish internal (thermal) from external (mechanical) states of equilibrium. The latter depend on friction. An example would be a ladder resting against a house. The friction between the house and the top of the ladder, and that between the ground and the bottom of the ladder, supply the forces and torques that maintain the ladder in a state of equilibrium, as long as it does not slip. Internal states of equilibrium, according to thermodynamics, can be characterized by a parameter called entropy. Irreversibility, as the physical time order, is expressed by the Second Law of thermodynamics: if two systems, each initially in internal equilibrium, interact with each other, then the increase in entropy of the one system added to the increase in entropy of the other is larger than or equal to zero. This formulation is more correct than the often-heard expression ‘the entropy of a closed system cannot decrease’, because the entropy of a system which is not in equilibrium is not well defined in thermodynamics. Besides, this formulation makes explicit mention of the correlation of irreversibility and interaction.
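A worked example of this formulation (temperatures invented for illustration): a small quantity of heat Q passes from a hot system to a cold one, each remaining close to internal equilibrium, and the sum of the two entropy changes is positive.

    # Two systems in internal equilibrium exchange a small amount of heat Q.
    # Temperatures are illustrative; each system stays near equilibrium (Q small).
    T1, T2 = 400.0, 300.0     # kelvin, T1 > T2
    Q = 1.0                   # joule, flowing from system 1 to system 2

    dS1 = -Q / T1             # system 1 loses entropy
    dS2 = +Q / T2             # system 2 gains more entropy than system 1 loses
    print(dS1 + dS2)          # > 0, as the Second Law requires; zero only if T1 == T2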
The irreversibility of physical time is not merely an addition to the numerical time order of before and after. Whenever there are several interacting systems, one does not have causal chains with a serial order, but causal networks with at best a quasi-serial order (3.1).[1] According to Hans Reichenbach this means that if a direction is assigned to one causal chain, which connects two systems in a causal network, a direction is determined for each causal chain in the network. This idea of a network presupposes both the numerical order of before and after and the spatial order of simultaneity. Furthermore, the equilibrium state, the final state in any interaction, has a spatial character because it is characterized by a spatial uniformity of some intensive parameter such as temperature or pressure. Finally the increase of entropy, the irreversible process in which the physical approaches equilibrium, reflects a relation between the physical and the kinetic relation frames.
6.2. The asymmetry of physical time cannot be reduced to probability
The irreducibility of the physical order of irreversibility to kinetic motion has not gone unchallenged. A basic motive of classical physics was the reduction of all physical phenomena to reversible motions of particles in a field of force. Physicists attempted, for example, to reduce thermodynamics to statistical physics by explaining the macroscopic laws of the former as the net result of the motions and interactions of molecules, which were assumed to be reversible.[2] Ludwig Boltzmann, especially, is considered to have succeeded in deducing the irreversibility stated in the Second Law from a probability calculation on the motion of the molecules composing a gas. In the kinetic theory of gases, Boltzmann showed that the thermodynamic concept of entropy is related to the amount of disorder in the system, and he demonstrated an ordered system to be much less probable than a disordered one. He stated that any closed system will develop from a less to a more probable state, and thought that this explained time asymmetry as a macroscopic statistical phenomenon.
Chapter 8 will offer arguments to support the view that probability concerns the relation between typical laws and individual subjects, and therefore cannot serve as the basis for a modal universal law. At present it may be observed that (a) the mathematical concept of probability does not involve time asymmetry; (b) the realization of a mathematical possibility requires some irreversible physical interaction; (c) therefore, statistical physics has to introduce irreversibility as an independent category in probability theory; (d) this is only possible if irreversibility is correlated to physical interaction; consequently, (e) the alleged reduction of irreversibility to statistical laws is a prejudice.
Boltzmann’s derivation of irreversibility, in fact, presupposes temporal asymmetry between the initial and final states.[3] Or rather, it shifts the problem to the question: Why should probability increase in time? Calling this self-evident would be begging the question, because then the asymmetry of time itself would be self-evident. It is not: the asymmetry of time is an irreducible mode of experience, empirically discovered.
Consider a closed system, in internal equilibrium. The entropy is not completely constant, but exhibits spontaneous fluctuations (for instance, Brownian motion). Because of the individual behaviour of the composing particles of the system the latter can only be said to be near equilibrium. Considering a system during such a fluctuation, one can deduce that after a while the entropy will most probably be larger. But one can also show that the entropy (with the same probability) will have been larger some time before. Consequently, it is impossible to deduce time asymmetry from the increasing entropy of a closed system alone.[4]
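This statistical symmetry around equilibrium can be illustrated with the Ehrenfest urn model, a standard toy model used here only for illustration: N particles are distributed over two halves of a box, and at every step one randomly chosen particle changes sides. The occupation number drifts towards N/2 and then fluctuates around it; a record of these fluctuations, taken by itself, looks the same whether read forwards or backwards.

    import random

    # Ehrenfest urn: N particles in two halves; at each step, one randomly chosen
    # particle changes sides. Illustrative toy model of fluctuation near equilibrium.
    random.seed(1)
    N = 100
    n_left = N                         # start far from equilibrium: all on the left
    history = []
    for step in range(5000):
        if random.randrange(N) < n_left:
            n_left -= 1                # a left-hand particle was picked and moves right
        else:
            n_left += 1
        history.append(n_left)

    print(history[0], history[-1])     # drifts from 100 toward ~50, then fluctuates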
To meet this objection, Reichenbach developed a theory of branch systems.[5] If a system is branched off from the universe (that is, if it is isolated physically), its entropy will most probably increase or remain constant. According to Reichenbach this determines the direction of time as that direction in which most thermodynamic processes occur. This is an improvement in so far as it makes time asymmetry less dependent on the typical properties of the particular systems we study, but time asymmetry is still introduced beforehand, in two ways.
First, Reichenbach only considers those systems which branch off and their subsequent development. Secondly, Reichenbach and Boltzmann can prove only that some macrostates of the system are more probable than others. They do not prove that the states of a system are ordered in time according to a monotonically increasing or decreasing probability, such that the direction of physical time can be defined as that of increasing probability. This is a separate statement, not implied in the mathematical concept of probability.[6] As a separate law it is yet another expression of the asymmetry of time.
In statistical calculations one also has to correlate this asymmetry with physical interactions, as can be verified in modern treatments of thermal physics. For instance, the concepts of entropy and temperature may be introduced with the help of a simple system, like a linear chain of spins.[7] For the sake of calculating the entropy and similar quantities, one assumes that the spins do not interact with each other. Then it is possible to calculate which macrostates are more probable than others. But in order to show that a particular state will change such that its probability increases, one has to assume that the spins do interact. Thus temporal irreversibility cannot be obtained without explicit reference to physical interactions. Therefore the conclusion is warranted that, even in statistical physics, irreversibility is an independent category, correlated to physical interaction. Irreversibility is an irreducible law, and need not be introduced into physics via probability laws.
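The first half of such a calculation, for a chain of non-interacting spins, amounts to counting: the multiplicity of a macrostate with n spins ‘up’ out of N is the binomial coefficient, and its logarithm gives the entropy in units of Boltzmann’s constant. A Python sketch (the numbers are illustrative):

    from math import comb, log

    # Multiplicity g(N, n) of a macrostate of N non-interacting spins with n 'up',
    # and its entropy S = ln g (in units of Boltzmann's constant k).
    N = 100
    for n in (0, 25, 50, 75, 100):
        g = comb(N, n)
        print(n, g, log(g))
    # The macrostate n = N/2 is overwhelmingly the most probable one; showing that a
    # given state *evolves* toward it requires letting the spins interact.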
It is a widespread misunderstanding that irreversibility is necessarily connected to probability laws. Sometimes dynamical laws are distinguished from statistical laws by two criteria: (a) dynamical laws are deterministic and operate with absolute certainty, whereas statistical laws are only capable of establishing probabilities (they hold for a great number of individuals and lose their meaning if applied to a small number of them); (b) dynamical laws describe reversible processes and statistical laws deal with irreversible phenomena.[8] However, the first criterion applies to the distinction of modal and typical laws while the second assumes the mutual irreducibility of the kinetic and physical relation frames. This reasoning overlooks the fact that any concrete process can be described statistically, and has both reversible and irreversible aspects. Thus the law describing the motion of a falling body is called dynamical. According to criterion (a) this implies determinism, which can only be maintained if the falling body is not too small, i.e., if Brownian motion and quantum effects can be neglected. And according to criterion (b) this implies reversibility, which could only be true if friction did not exist. On the other hand, heat conduction is supposed to be governed by a statistical law, although if one makes the same approximations as one did with falling bodies the law may be called deterministic. Heat conduction also involves some reversible aspects. For example, in homogeneous media the conductivity is independent of the direction of the current.
The background of this distinction is that every time a philosopher finds a reversible phenomenon he looks for a deterministic interpretation, whereas as soon as he finds irreversibility, he tries to explain it statistically. But this is just narrow-mindedness. The proper way to compare these laws is, either to abstract from concrete reality in order to study merely modal laws, or to study the typical individuality structure of the physically qualified subjects constituting a macroscopic body.
6.3. Irreversibility also applies to microprocesses
The assumption that the reduction of thermal physics to statistical physics implied the explanation of irreversibility on a macroscopic scale was based on the hypothesis that the interactions between molecules are completely reversible. Since the rise of quantum physics it is clear that irreversible processes also occur on a microscopic scale. This is very obvious with respect to the spontaneous processes which occur, for example, in radioactive nuclei or activated atoms and molecules, and which always involve the transition from a high energy level to a lower one. Albert Einstein argued that these spontaneous processes must be distinguished from stimulated ones, which are in a sense reversible.[9]
But even the motion of, e.g., an electron can no longer be considered completely reversible. Its relative place and motion are represented by a wave packet, which is the sum of a number of infinitely extended waves with different wavelengths, amplitudes, and mutual phase relations, such that the total amplitude of the waves is appreciable only within the wave packet (chapter 7). Outside the packet the composing waves have a resultant zero amplitude. In an interaction, such as a collision between the electron and an atom, the electron’s wave packet is reduced to a relatively small size, and after the collision this reduced wave packet will gradually extend.
This expansion is irreversible. From a kinematic point of view it is quite conceivable that one could obtain a contracting wave packet by reversing the motions of all composing waves (preserving their phase relations), but physically no interaction can be devised that realizes this construction. The production of a wave packet and its subsequent development (which already presupposes time asymmetry) need an explanation which cannot be given in kinematic terms only; they need a physical explanation, irreducible to a kinetic one. The reduction of the wave packet occurs in any interaction of microsystems, not only in the interaction of a microsystem with a macrosystem, e.g., in a measuring process, as is often suggested by adherents of the Copenhagen interpretation of quantum mechanics.[10] In my opinion, the micro-macrosystem interaction characteristic of the measuring process is merely a special case of physical interaction.
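The expansion referred to here has a simple quantitative core in the textbook case of a free Gaussian wave packet, whose width grows with time according to a standard result (not specific to this book). A Python sketch with electron-scale numbers, chosen for illustration:

    import math

    hbar = 1.054571817e-34      # J*s
    m = 9.1093837015e-31        # electron mass (kg)
    sigma0 = 1e-10              # initial packet width (m), roughly atomic size

    def sigma(t):
        # Width of a free Gaussian wave packet after time t (standard result):
        # sigma(t) = sigma0 * sqrt(1 + (hbar*t / (2*m*sigma0^2))^2)
        return sigma0 * math.sqrt(1.0 + (hbar * t / (2 * m * sigma0**2))**2)

    for t in (0.0, 1e-16, 1e-15, 1e-14):
        print(t, sigma(t))      # the packet spreads rapidly; it never contracts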
This is not contrary to the existence of the so-called principle of detailed balance in equilibrium,[11] which is closely related to the principle of reciprocity[12] or micro-reversibility.[13] No one doubts the validity of some principle of overall balance as a necessary (though not sufficient) condition for equilibrium, or its value as a guiding principle for research. But it is often interpreted uncritically as implying the time reversibility of any microprocess. This interpretation overlooks the fact that spontaneous processes (for example, Brownian motion) occur in a state of equilibrium. Also, interactions resulting in the expansion of wave packets can be compensated by new interactions without this process becoming reversible. Mutually compensating processes certainly do not need to be each other’s time reverse. They may be completely different processes, provided they compensate each other’s effects.[14] And even if the processes are reversed with respect to their typical structure, they are not necessarily each other’s temporal reverse. It suffices that they are equally probable.
In most cases the principle of detailed balance cannot be proved from first principles. In some cases (namely, those involving magnetic interactions) one can prove that time reversal is not assumed by the principle of detailed balance.[15] In other cases physicists claim that it is. For instance, in classical physics an elastic collision is said to be reversible in time (in fact, this is only the case if the interacting force is spherically symmetric). This means that if one were to reverse the time parameter (meaning the reversal of all velocities) in a certain state created after the collision, the collision process would proceed in the reversed time direction, and one would return to the initial state (with reversed velocities).
In quantum mechanics, time reversal with respect to a collision means that if an initial state a leads to a final state b with a certain probability, then an initial state b′ leads to a final state a′ with the same probability (a primed state equals the corresponding unprimed state, but with reversed velocities or wave vectors). Although this is a necessary condition for equilibrium in a gas (for example, if the equilibrium arises by means of elastic collisions among molecules), a statement of this kind is not valid with respect to spontaneous processes.
For instance, in Einstein’s derivation of Planck’s distribution law for black body radiation, he established that the probability for stimulated emission is the same as for stimulated absorption of radiation. But he could only arrive at Planck’s formula when he added a third mechanism, the spontaneous emission of radiation, which has no reverse.[16]
Both in classical and in quantum physics it is time in its kinematical aspect which is reversible in those cases where the principle of detailed balance is applicable. In fact, the principle only states that there are processes which are symmetrical with respect to kinetic time, just as there are atomic and molecular structures which display spatial symmetries. But, as soon as we study the process of interaction itself, irreversibility is unmistakably present.
6.4. Time asymmetry concerns subjects, but is a law
Whoever does not recognize the modal distinction between the kinetic and physical aspects must feel forced to deduce the factual irreversible behaviour of physical systems (including friction) from the statistical result of a large number of reversible processes, that is, to locate irreversibility only on the subject side.
This is also the case in Hans Reichenbach’s theory of branch systems. Even if one assumes that the macrostates of a system are ordered according to a monotonically changing probability, one has to prove that the direction of this change is the same for all physical systems. By relating the branch systems to the universe Reichenbach thinks he is able to determine a universal direction of time, which is not a law, but a factual subjective property of the universe. Thus he has to assume that the universe as a whole has an increasing entropy, and discusses the possibility that the entropy will decrease at some time.[17]
Meanwhile, the neo-positivist Reichenbach cannot avoid realizing that such considerations are rather speculative, for it is hardly possible to say anything meaningful about the properties of the universe (total energy, total entropy),[18] any more than about its relative motion, its spatial position, or even its number. (Is the universe a member of a class of universes, possibly the only member?) It is very doubtful whether the universe can be treated as a subject with properties like other subjects. (Reichenbach states, e.g., that the universe is a closed system, without explaining what this should mean.)[19]
Reichenbach’s theory is invalid if it is not assumed that the present state of the universe is a state of low entropy. There are at least three objections to this assumption. First, the entropy of a system can only be defined if the system is isolated and finite, and even if pretending to know what this means with respect to the universe, one would not know whether it is factually the case. Second, the entropy of a system is only defined if the system is in a state of equilibrium, and Reichenbach’s theory assumes that this does not apply to the universe. Third, Reichenbach relates entropy to probability, but his notion of probability ‘… is always assumed to mean the limit of a relative frequency’.[20] Therefore, it does not apply to a single system, such as the universe. What remains of the initial assumption is that the universe is not in a state of equilibrium. I fail to see how any conclusion can be drawn from this negative statement.
The universality of physical time asymmetry does not rest on its relation to the universe, but on its being a modal law, having universal validity. This law does not depend on the empirical fact that all physical processes, which have been observed till now, turn out to develop in one direction of time. One would rather say that the possibility of physical processes depends on the irreducible physical time order which is a basic modal law.
According to Grünbaum,
‘… the complete time symmetry of the basic laws like those of dynamics or electromagnetism is entirely compatible with the existence of contingent irreversibility’.[21]
But he admits that it is rather meaningless to call the laws of motion ‘basic’ and the law of irreversibility a ‘universally valid’ statement about ‘contingent facts’.[22] This artificial construction is invented to reconcile reversibility with irreversibility by relating the former to laws and the latter to facts. It is very strange indeed that, e.g., the reversible laws of electromagnetic wave propagation are called ‘laws’, although they are only valid in a very limited case (namely, in the absence of any absorbing physical subject), whereas the irreversible law of entropy increase is denied the status of a law because it is only valid in a very limited case (namely, in the absence of spontaneous fluctuations). This strange attitude can only be understood if we keep in mind the basic motive of 19th-century mechanical philosophy.
In contrast to this now outmoded view, I relate reversibility and irreversibility to mutually irreducible modal aspects of temporal reality. Just as static simultaneity is retained in the kinetic relation frame as the borderline case of a state of rest, so reversibility can often be found as a boundary case in the physical modal aspect – for example, electromagnetic wave motion in a vacuum or the motion of bodies when friction can be neglected.
6.5. Interactions
Although I do not like to speak about the interaction of a system with the universe, I agree with one point in Reichenbach’s theory of branch systems, namely, the significance of interaction for the physical order of irreversibility.[23] After an interaction with another system, the state of a physical subject will change gradually, until after some time it reaches a state of equilibrium. Thus the first effect of a physical interaction is to disturb the pre-existing equilibrium states of the interacting systems. After the interaction the system approaches a new equilibrium state, i.e., a state of uniform temperature, pressure, chemical composition, electrical potential, etc. This development is irreversible, that is, except for statistical fluctuations, no system in equilibrium will spontaneously move out of equilibrium, anticipating a future interaction with another system. Reichenbach seems to overlook this difference, assuming that the entropy of each branch system increases steadily until it is reunited with the universe.[24] In fact, the entropy of any branch system becomes constant after a while, remaining so until it contacts the universe again. Then the entropy will change, but only after this event, not before.
The error in Reichenbach’s theory is not simply that he begins with the analysis of a single subject (he admits that to be impossible)[25], but that he does not break radically with this method. He tries to deduce the time direction of the universe as a collection of single systems, from the fact that each system alone tends to a state of internal equilibrium. In my view, the analysis of modal physical time has to start with the relations between subjects, especially between pairs of subjects, just as is the case with numerical, spatial, and kinetic time.
Of course, if one begins by stating that in any two interacting systems the entropy never decreases, this fact can be extrapolated to the universe, provided the latter is understood to be as many interacting systems as one likes. Put this way, the statement that the entropy of the universe will always increase is valid though quite useless, whether or not this universe has a finite limit. This is not the case in Reichenbach’s theory which is only applicable to a finite universe. Moreover, his notion of the universe cannot be used in his theory, because it presupposes what he wants to prove. My theory is purely relational, whereas Reichenbach’s requires an absolute universe.
From the fact that a closed system is not in a state of equilibrium it cannot be deduced with certainty that it has interacted with another system some time before: it may be a spontaneous fluctuation. On the other hand, interaction always disturbs equilibrium. Therefore, interaction, rather than the approach to equilibrium, is an irreversible expression of modal subjective physical time.
6.6. Initial and boundary conditions
A physical system cannot be described without taking into account its interaction with other systems. Often this interaction can be contained in the so-called boundary conditions. This is the only acceptable interpretation of ‘the universe’: the environment of the system under consideration, i.e., the spatially continuous representation of the physical relation of the subject with all other subjects. The simplest boundary condition is a rigid wall, which must be understood in a physical rather than a spatial sense: it is an infinitely high and steep potential-energy barrier. Furthermore one can distinguish thermally conducting walls, movable walls, porous walls, etc. If a system is in an equilibrium state, the latter is largely determined by these boundary conditions.
It is possible to describe the interaction between two systems by assuming that at first they are kept apart by some kind of boundary, which is removed at some later time. In that case the entropy of the combined systems will increase. This has induced Brian Pippard to assume that the entropy is in fact a measure of the constraints on the system. If a system is restricted by some kind of boundary condition from reaching a state that it would have if this boundary condition were absent, then its entropy is relatively low.[26] He admits, however, that this view has a severe disadvantage. It overlooks the transient condition between the removal of the constraint and the subsequent arrival at the new equilibrium state. Suppose the constraint is removed, but reinstated before the system has time to reach equilibrium. Then the system will have an entropy value somewhere between the initial value and the value for the equilibrium state without constraint. Thus the reinstatement of the constraint does not itself decrease the entropy. It just stops the increase of entropy. This implies the impossibility of reducing irreversibility to spatial constraints. As observed in chapter 5, the increase of entropy is invariably related to currents. A constraint (like thermal isolation) can now be interpreted as prohibiting a current (like a heat flow) to occur.
Nevertheless, Pippard is certainly right in pointing to the relevance of the boundary conditions or constraints for irreversible processes.[27] The initial state is also a boundary condition, though not a spatial one. The irreversibility of the physical temporal order makes the initial state relevant, but not the final state. That is, whereas the initial state and the spatial boundary conditions determine physical processes, the final state is merely their effect. Similarly, it is the removal of constraints, not their reinstatement, which leads to a change of entropy.
Dynamical development in phase space
In the context of statistical physics the microstate of a system consisting of many molecules is represented by a point in a phase space. A non-equilibrium macrostate is represented by a small domain in this space and an equilibrium macrostate is represented by a large domain. Specifically, a physical interaction creates a non-equilibrium macrostate which is represented by a relatively simply shaped (e.g., spherical) domain in this space.
This is nicely illustrated in a picture in Reichenbach’s book.[28] During the spontaneous approach to equilibrium the domain is gradually spread out into a very whimsically shaped ‘starfish’ extending through all of phase space, although it retains the same volume as the original figure.[29] Any microstate of a system is represented by the set of all positions and momenta of all molecules, which is objectified by a point in phase space. That the initial macrostate must be described by a domain in this space, rather than by a point, is due to the fact that no macroscopic interaction is sufficiently accurate to determine the microstate exactly – no boundary can determine a single point.
The production of a macrostate cannot be understood in kinematical terms only. It requires a new mode of explanation, which is the physical one.[30] To delimit a certain region in phase space requires the introduction of constraints on the positions and momenta of all molecules of the system. These constraints cannot determine a single point in phase space exactly.
However, a slight deviation in our macroscopic specification of the state will not make much difference to the initial state, nor to the final macrostate. The point is that the starfish represents the set of all microstates compatible with the initial conditions. But this starfish is itself only a very small subset of the set of all microstates representing the final macrostate, because the latter could also have been reached from other initial macrostates, incompatible with the actual initial conditions.
Now suppose one wishes to return from the final macrostate to the initial macrostate by reversing all molecular velocities, which is theoretically possible in a kinetic sense. This means that, with the help of some interaction, one has to prepare a microstate falling within one of the arms of the starfish. These arms are, however, very thin, because the starfish extends over the whole large domain representing the final macrostate, yet has the same small volume as the domain representing the initial macrostate. Thus even a slight inaccuracy in the specification of the reversed final state already means that the process will not end up in a state compatible with the original initial conditions.
This analysis requires one to relate irreversibility to the accuracy with which a microstate can be prepared by some interaction.[31] Quantum physics has shown that this accuracy has a finite limit, determined by Heisenberg’s indeterminacy relations. But it is not necessary to appeal to quantum physics. Even in classical physics it is sufficient to state that any physical interaction can only determine a domain in phase space. Increasing the accuracy does not make it possible to reduce a domain to a single point. This is a rejection of the classical mechanist doctrine, which holds that a physical state can be represented by a point in phase space.[32]
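The ‘starfish’ behaviour can be imitated with any area-preserving chaotic map. The sketch below uses Arnold’s cat map, a textbook stand-in (not taken from Reichenbach) for Hamiltonian phase-space flow: a small square of initial conditions keeps its area exactly, yet is stretched into ever thinner filaments, so that preparing the time-reversed state would demand ever finer accuracy.

    # Arnold's cat map: an area-preserving map of the unit square onto itself,
    # used here as a stand-in for Hamiltonian phase-space flow (a textbook example).
    def cat_map(x, y):
        return (x + y) % 1.0, (x + 2.0 * y) % 1.0

    # A small 'macrostate': a grid of points filling a 0.1 x 0.1 square.
    points = [(0.05 + 0.001 * i, 0.05 + 0.001 * j) for i in range(100) for j in range(100)]

    for step in range(8):
        points = [cat_map(x, y) for x, y in points]

    # After a few steps the points lie on thin filaments stretched across the
    # whole square, although the map preserves area exactly (determinant = 1).
    xs = [p[0] for p in points]
    print(min(xs), max(xs))     # the cloud now spans essentially the full unit interval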
6.7. Causality
The concept of causality is often identified with that of lawfulness, and even with that of determinacy. Therefore, it is discussed with respect to the problem of individuality or the occurrence of stochastic processes.[33] Sometimes, causality is reduced to irreversibility, and conversely, there exist causal theories of time.[34] The law-subject relation and its bearing on determinacy will be discussed in chapter 8. Here I want to comment only on the relation between cause and effect.
It is often stated that this relation is ill-defined in physics. It is often impossible to state unequivocally what is cause or what is effect in processes occurring in closed systems. This is not difficult to understand because a study of closed systems requires an initial interest in the interaction as a subject-subject relation. Furthermore, a distinct cause-effect relation is hardly tenable due to the law of action and reaction.
But if external influences are considered on an (otherwise closed) system the causality concept can still be maintained. Especially in this case, the cause-effect relation is irreversible and asymmetric, as it is always understood to be. One speaks of causality if the state of a system is changed by some interaction. As such the concept of causality refers back to the kinematic relation frame. Hence it is an analogical concept, and, as such, it returns in every modal aspect following the physical one.[35]
In order to make this clear, consider the following example of a closed system consisting of two subsystems in thermal contact. Given the respective temperatures, T1>T2, a heat current J flows from the first to the second system. This system as a whole cannot be analysed in terms of cause and effect, and physicists will always take recourse to object-object relations: the relative energy, the temperature difference, the current. But if one considers the first subsystem, the heat current causes the temperature T1 to decrease, and considering the second subsystem, the heat current causes T2 to increase. Alternatively, one can also consider the thermal contact, and state that the temperature difference causes the current J to flow. Thus at the same time the current can be considered both as cause and as effect. This is possible, because in the cause-effect relation the reaction of the system to the cause of its change is neglected.
Therefore, the causality concept has a limited applicability – namely to cases where internal states can be distinguished from external influences. But this does not mean that it is useless, even in physics.[36] Especially in experimental physics, in which external disturbances are deliberately introduced (or at least must be accounted for) in order to study the way a system reacts to them, one frequently makes use of the causality concept.[37]
The main reason why the cause-effect relation is not very useful in physics is that it is not a very simple relation. It is not a subject-subject relation (the effect is not a subject), nor an object-object relation (it is not a succession of states). Whereas the cause is some interaction (a subject-subject relation) in which the reaction is not taken into account, its effect is objective (the changing state of one of these subjects). Thus a cause-effect relation is a complicated subject-object relation, reducible to the basic subject-subject relation called interaction.
[1] Reichenbach 1956, 36 speaks of ‘lineal order’.
[2] Nagel 1960, 288-312; 1961, 336-345; Reichenbach 1956, 54ff.
[3] See, e.g., the discussions on this subject in Gold (ed.) 1967; see also Landau, Lifshitz 1959, 30; Grünbaum 1963, 240ff; Whitrow 1961, 5ff; Penrose 1970, 41, 42; Schrödinger 1962, 14; Weizsäcker 1971, 233, 240.
[4] Reichenbach 1956, 108ff; Grünbaum 1963, 242; Tolman 1938, 146ff.
[5] Reichenbach 1956, 117ff; see also Grünbaum 1963, 254ff; 1974, 789ff.
[6] Reichenbach 1956, 143.
[7] Kittel 1969, Chapter 4.
[8] Lindsay, Margenau 1936, 201.
[9] Einstein 1917.
[10] Grünbaum 1963, 249; Ludwig 1954, 181.
[11] Tolman 1938, 161ff, 521; Kaempffer 1965, chapter 28.
[12] Kaempffer 1965, 255.
[13] Messiah 1958, 673; Tolman 1938, 163.
[14] Tolman 1938, 114ff, 162: ‘… in general … processes which are the inverse of each other do not exist …’
[15] Kaempffer 1965, 258-261; Messiah 1958, 675.
[16] Einstein 1917; Jammer 1966, 112-114.
[17] Reichenbach 1956, 117ff. Reichenbach’s reference to the ‘universe’ is criticized by Grünbaum 1963, 261ff. See also Dooyeweerd NC, III 629ff; Popper 1959, 196ff calls explanations which depend on a particular improbable state of the universe ‘speculative metaphysics’, see Popper 1974. The attribution of subjective properties like temporal duration and spatial dimension to the universe leads to antinomies, as was first discovered by Kant 1781, A 420-433, B 448-461.
[18] Reichenbach 1956, 132, 133.
[19] Reichenbach 1956, 135.
[20] Reichenbach 1956, 123.
[21] Grünbaum 1963, 277.
[22] Grünbaum 1963, 273. Both Popper and Grünbaum have pointed out that there are physical processes whose irreversibility cannot be reduced to ‘entropic’ irreversibility; see Grünbaum 1964, 1974; Popper 1974.
[23] Reichenbach 1956, 117; Grünbaum 1964.
[24] Especially figure 21, on page 127 of his book, is in my opinion a very inadequate and probably misleading representation of Reichenbach’s own views, and certainly of what actually happens. See also Grünbaum 1974, 789, 794, 795.
[25] Reichenbach 1956, 117.
[26] Pippard 1960, 94ff; the Second Law is formulated as: ‘It is impossible to vary the constraints of an isolated system in such a way as to decrease its entropy’ (ibid., 96).
[27] Brillouin 1964, chapter 6.
[28] Reichenbach 1956, 94.
[29] This is a consequence of Liouville’s theorem. See Tolman 1938, 51.
[30] Reichenbach 1956, 149ff.
[31] Bondi (in Gold (ed.) 1967, 3) interprets this to mean that irreversibility is merely due to the inability of the experimenter to produce microstates.
[32] The probabilistic interpretation of irreversibility also makes use of this fact. The probability of a ‘point-state’ is zero. Only the probability over a domain can have a finite value. Boltzmann’s derivation of his famous ‘H-theorem’, which describes the irreversibility of physical processes, also leans heavily on the characterization of the state by a domain. Brillouin 1964, chapter 1 relates entropy to information, i.e., the experimenter’s knowledge of the system’s initial microstate. It is true, of course, that in information theory entropy and its increase play an important part, but this has a physical basis. It is not the experimenter’s knowledge that counts, but his ability to delimit the domain in a physical sense.
[33] Reichenbach 1956, 55, 149ff; Bunge 1959a, part I; Campbell 1921, 49-57; Braithwaite 1953, chapter 9.
[34] Cf. Whitrow 1961, 175, 217f; Frank 1941, 53ff; Reichenbach 1956, 24ff.
[35] Dooyeweerd NC, I, 558; II, 110.
[36] Margenau 1950, 389ff; 1960, 437; Toulmin 1953, 107ff; Bunge 1959a, 29, 91ff; Nagel 1939, 25f; 1961, 316ff.
[37] Campbell 1921, 53.
Chapter 7
Wave packets
7.1. Relaxation and oscillation
7.2. Waves
7.3. Differential equations
7.4. Superposition
7.5. Energy and momentum
7.6. Heisenberg’s relations
7.7. Interference and Huygens’ principle
7.8. The wave-particle duality
7.1. Relaxation and oscillation
Chapter 5 discussed the modal retrocipations of interaction: energy, force, and current. These physical concepts, referring to the numerical, spatial, and kinematical modes of explanation, are not unrelated. Currents presuppose forces, and forces presuppose energy. A further complication is that a full account of forces can only be given if one considers energy (and other extensive parameters) in disclosed form, i.e., as potentials. And currents can only be accounted for if one considers forces as fields. Especially in relativity physics, which develops kinetic anticipations in the numerical and spatial modal aspects, the purely retrocipatory concepts of internal energy (or mass) and force can no longer be used, and must be replaced by the energy-momentum four-vector and the field. This chapter intends to study more closely the anticipations of the first three modal aspects on the physical aspect. As observed in section 2.2, in order to understand the anticipations one sometimes requires knowledge of some specific characters, to be discussed more extensively in part II. In the present case, one has to rely on the specific properties of waves and oscillations, in particular those of wave packets.
To start with, numerical time, being originally a purely numerical difference between natural or rational numbers, becomes continuous when anticipating the spatial modal aspect, and uniform when anticipating the kinematical aspect. With respect to physical interaction, it should be subjected to the order of irreversibility.
If, for instance, two bodies initially at different temperatures are brought into thermal contact, a heat current will decrease their temperature difference. But the heat current is in turn proportional to this difference, so that the current will also decrease. The equalization of the temperature will slow down gradually. Measured against kinematic time, this process proceeds exponentially. It is described numerically by an exponential function, whose exponent is the kinetic time parameter divided by a constant relaxation time.[1] The relaxation time is a measure of the retardation between cause and effect. The value of the relaxation time is determined by the conductance of the thermal contact and the heat capacities of the two systems. Therefore it always has a typical and individual character. But the exponential behaviour itself is independent of the typical individuality of the two systems, and is thus of a modal nature.
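A minimal numerical sketch may illustrate this (all values hypothetical): two bodies of equal heat capacity C are joined by a contact of conductance G, so that their temperature difference decays with relaxation time τ = C/2G.

    import math

    # Two bodies of equal heat capacity C (J/K), joined by a thermal
    # contact of conductance G (W/K); hypothetical values throughout.
    C, G = 100.0, 5.0
    tau = C / (2 * G)          # relaxation time: 10 s

    dT, dt = 10.0, 0.01        # initial temperature difference (K), time step (s)
    for _ in range(1000):      # integrate d(dT)/dt = -(2G/C)*dT for 10 s
        dT -= (2 * G / C) * dT * dt

    print(dT)                        # numerical result after one relaxation time
    print(10.0 * math.exp(-10/tau))  # exact value: 10/e, about 3.68 K

The exponential form of the decay is independent of the chosen values of C and G; only the relaxation time depends on them, in accordance with the modal/typical distinction made above.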
The relaxation time is not only found in thermal physics. Relaxation, damping, or absorption occur in mechanics, wherever there is some kind of friction, resistance, or energy dissipation. In unstable atomic and nuclear systems it is expressed in the relaxation or decay of an excited state to the ground state. Relaxation is always related to the transport of energy from one place to another; the transport of energy from one state to another; or the transformation of one kind of energy into another. Relaxation always means the irreversible approach towards an equilibrium state.
Oscillation occurs in a system when the equilibrium state is approached with a velocity proportional to the deviation from equilibrium at an earlier instant instead of at the same moment as in relaxation. In an oscillation the system overshoots the equilibrium state. An example is a pendulum passing its central (equilibrium) position with a velocity nearly proportional to its amplitude. The amplitude of the oscillation will decrease exponentially due to friction. In fact, oscillation will occur only if the friction is not large enough for a simple relaxation process. The oscillatory motion can be described by a harmonic function, i.e., a sine or cosine function, or an exponential function, which now has an imaginary exponent (2.4). Besides the relaxation time describing the gradual decrease of the amplitude, the oscillation time (the period of the oscillation, the inverse of its frequency) occurs as a typical number. It depends on the internal structure of the system.

Both oscillation and relaxation can be used as clocks. In the former case one has to compensate for any kind of relaxation, e.g., for friction in a pendulum clock or a watch. Relaxation time itself is used for time measurement in the C14 method of determining the age of archaeological objects.
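The C14 method can be made concrete in a few lines; the following is a minimal sketch (the sample fraction is hypothetical, the half-life is the accepted value for C14):

    import math

    half_life = 5730.0                 # C14 half-life in years
    tau = half_life / math.log(2)      # relaxation time of the decay

    remaining = 0.25                   # measured N/N0, hypothetical sample
    age = -tau * math.log(remaining)   # invert N(t) = N0*exp(-t/tau)
    print(round(age))                  # two half-lives: about 11460 years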
Hence a physical time scale could be defined, related to the kinetic one by way of an exponential function. The non-linearity of this relation implies that two intervals which are congruent in one of these time scales will be incongruent in the other one. It is, in part, a convention that the kinetic time scale is preferred even in physics, in particular because periodic clocks are much more accurate. But this does not mean that either scale is conventional. In both cases it is required that the kinetic, as well as the physical, temporal relation be properly represented by the scale: the temporal relation must be independent of the typical individuality of the clock by which it is eventually measured (3.11).

In a clock based on the physical process of oscillation, the physical aspect of irreversibility is taken care of by the compensation of retardation effects. For a carefully constructed clock the time rate is in accord with the kinetic uniformity of time, the Newtonian metric (3.10). The clock must be synchronous to other clocks, which refers to the spatial order of simultaneity. But essentially, the time is measured in a discontinuous way, because the number of periods is counted. This shows again that time, as this word is usually understood, is numerical time, opened up anticipating the spatial, kinetic and physical aspects.
7.2. Waves
The dynamical development of the spatial modal aspect is realized especially in the introduction of the concept of a field (5.5). Fields are intimately related to waves. After Maxwell developed the mathematical theory of electromagnetism, he realized that his equations suggested the possibility of wave motion, which he identified as light. Chapter 2 pointed out that one needs to use spatial objects like points and boundaries in order to analyze spatial relations. It took some time before physicists realized that kinetic objects are needed in a modal description of motion.
Real numbers turned out to be numerical anticipations to the spatial modal aspect (2.3). Real numbers objectify spatial points, real functions of numbers objectify extended boundaries in space, such as lines in a plane or planes in a three-dimensional space. Functions of real or complex vectors also play a role in the anticipations to the physical and kinematical aspects. Especially a kinetic subject can be objectified by a set of functions, more or less in the same way as a spatial figure can be objectified by a set of points.
The points on a spatial boundary are connected by an equation. A function of the form f(x,y): y=ax+b represents the law for a straight line in a plane, as long as the numbers a and b are not specified, whereas for certain values of a and b (e.g., y=2x+3), the equation describes a particular line. From the law we can find a and b if two points on the line (two solutions of the equation) are given. Similarly, the functions in a wave packet are determined by a wave equation on the law side, and by specific amplitudes and phase relations on the subject side.
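This law-subject relation is easily made concrete (the two points are hypothetical):

    # Two points (two 'solutions' of the law) determine the line y = a*x + b.
    (x1, y1), (x2, y2) = (1.0, 5.0), (3.0, 9.0)   # hypothetical points
    a = (y2 - y1) / (x2 - x1)
    b = y1 - a * x1
    print(a, b)   # 2.0, 3.0: the particular line y = 2x + 3 of the example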
Wave packets are kinetic, not physical subjects. They can move, but they cannot as such interact with each other. They are subjects in the first three modal aspects because they are countable and have other numerical characteristics, they have extension, and they move. But if electrons collide with each other, they do not do so because they are wave packets, but because they are electrically charged and therefore exert a Coulomb or Lorentz force upon each other. This property of being charged is not included in the wave character of the electron’s motion – it is an additional specific property. Two subjects having the same energy but different (possibly no) charge may have similar wave packets. Thus we find that the wave packet is an objective representation of a physical subject with respect to its motion. As such it is of a general, universal, and thus modal character.
Although wave packets are kinetic subjects because they move, the composing waves are not. These are kinetic objects, necessary for the objectification of kinematical subjects. The composing waves do not move and are therefore not subjects. This situation is parallel to the relation between a spatial figure and the points contained in it. The waves composing a wave packet differ from static functions (which anticipate spatial boundaries) by having a time-dependent phase. This phase is responsible for the interference phenomena, which have their static counterpart in the phenomenon of superposition of spatial functions or fields. The superposition and interference properties of waves anticipating the physical relation frame are very important in the description of the interaction of physical subjects.
7.3. Differential equations
The mathematical possibility of describing the motion of a particle with the help of a wave packet was already seen by William Hamilton nearly a century before Louis de Broglie stated his famous hypothesis about the wave character of electron motion (9.1).[2] It is a direct consequence of the application of differential equations to the problem of motion. The law for a certain motion, whether uniform or accelerated, is mathematically objectified in a differential equation[3] whose subjective counterpart is a set of undetermined functions. The equation can only yield a definite solution if some initial or boundary conditions are specified.
This was first recognized by Isaac Newton and Gottfried Leibniz, who independently invented differential and integral calculus in order to be able to study mathematically the motion of material bodies.[4] Thus the mathematical expression of the law of purely kinematical motion, Newton’s first law, is (in Leibniz’ notation) dr/dt=v, wherein r and v are vectors. The solution of this equation, r(t)=r(0)+vt, contains the undetermined parameters, the initial position r(0) and the velocity v. If these are known, the position of the moving body is given for any time. The moving subject itself is assumed not to change and its position is therefore represented by a characteristic point, e.g., its centre of mass. This law is valid for non-interacting subjects provided r and v refer to an inertial system. Differentiating the equation yields d²r/dt²=0 as an equivalent expression of the same law. If now interaction is introduced in the form of a force or field F(r), the second law of motion is m·d²r/dt²=F(r).
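A minimal sketch (hypothetical force and initial conditions) shows how such an equation fixes the motion once r(0) and v(0) are given; here the second law is integrated for F(r) = -k*r in one dimension.

    import math

    # One-dimensional Euler integration of m*d2r/dt2 = F(r), F(r) = -k*r.
    k, m = 1.0, 1.0                      # hypothetical constants
    r, v = 1.0, 0.0                      # initial conditions r(0), v(0)
    dt = 1e-4
    for _ in range(int(math.pi / dt)):   # half an oscillation period
        r, v = r + v * dt, v - (k / m) * r * dt
    print(r, v)                          # close to (-1, 0): the path is fixed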
The solutions of these equations do not describe motion itself, but the spatial path of the motion, i.e., a retrocipatory spatial analogy of kinetic motion. But an anticipatory description is required. A function f(r) is a numerical anticipation of spatial figures. Therefore functions of this kind, rather than point vectors r=(x,y,z), are subjected to a differential equation. Such functions must be differentiated with respect to all temporal and spatial coordinates (t and r). This can be done in various ways.
For electromagnetic wave propagation in vacuum, Maxwell found the following law as a consequence of his laws concerning electric and magnetic fields:

∂²f/∂t² = c²∇²f (with ∇² = ∂²/∂x² + ∂²/∂y² + ∂²/∂z²),

where f(r,t) represents the electromagnetic field.
The solution of this equation depends on the boundary conditions in a rather complicated way. If there are no boundary conditions specified, any function of the type f(r+ct+φ) is a solution of this equation.[5] This shows that the number c is the velocity of the kinetic subject whose motion is described by the equation. The velocity c belongs to the law of the motion and does not depend on initial or boundary conditions. This wave equation is relativistically invariant, and c has the same value with respect to any inertial reference system. Consequently, the wave equation can only describe the motion of subjects whose internal energy (or rest mass) is zero, i.e. light quanta. A difference with Newton’s law is that f(r,t) does not describe the path of the motion.
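This can be checked symbolically; a minimal sketch with the sympy library, in one spatial dimension for brevity:

    import sympy as sp

    # Verify that any twice-differentiable f(x + c*t + phi) solves the
    # one-dimensional wave equation d2f/dt2 = c^2 * d2f/dx2.
    x, t, c, phi = sp.symbols('x t c phi')
    f = sp.Function('f')
    u = f(x + c*t + phi)
    print(sp.simplify(sp.diff(u, t, 2) - c**2 * sp.diff(u, x, 2)))  # prints 0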
Another example is Schrödinger’s equation:

i∂f/∂t = (1/2m)∇²f,

where i is the imaginary unit, and m is a constant, to be identified with the mass of the subject (if it is physically qualified). The solution is of the type exp i(ωt+k.r+φ). The main difference with Maxwell’s equation is that it applies to a physical subject having mass and moving with a low velocity compared to c. It is not relativistically invariant, and its solutions are complex functions. ω and k are determined by the boundary conditions. As with Newton’s equation, other terms can be added describing motion in an external field.
These are not all the possibilities for differential equations describing motion. For instance, the Schrödinger equation can be written in a relativistically invariant form, which gives us the Dirac equation or the Klein-Gordon equation (after Oskar Klein and Walter Gordon). Currents are also subjected to differential equations. In classical physics one distinguished particle motion from a continuous current. The wave theory of motion shows that this distinction is unwarranted. Particle motion also achieves the character of a current when anticipating the physical modal aspect.
On the other hand, any physical current must be quantified when taken in the retrocipatory direction, i.e., when its energy is involved. In so far as the Maxwell and Schrödinger equations do not show damping, they are idealized limiting cases of real physical motion.
7.4. Superposition
Maxwell’s and Schrödinger’s equations also differ from Newton’s equation because they are homogeneous and linear. This means that if we have two different solutions, f1 and f2, any linear combination of them (af1+bf2) is also a solution. Here a and b are arbitrary real numbers for Maxwell’s equation, and complex numbers for Schrödinger’s equation. This implies that the solutions can be ordered in a function space, if an operation called the scalar product can be designed (2.4). The basis of this function space depends on the boundary conditions. In the case of unbounded uniform linear motion, the basis functions form a continuous set. This presents a difficulty because these functions cannot directly be normalized, but this problem can be solved, as will be shown presently.
For Maxwell’s equation this basis consists of the functions cos(ωt+k.r+φ), where ω (the angular frequency) and k (the wave vector) range over all positive and negative real numbers, with the condition ω/|k|=c. For Schrödinger’s equation the basis is formed by the set of complex exponential functions exp i(ωt+k.r+φ) with the same range for ω and k, but without the restriction of Maxwell’s equation. In both cases, φ depends on ω and k, but not in a regular way. Because cos(ωt+k.r+φ) = Re exp i(ωt+k.r+φ), where ‘Re’ denotes ‘the real part of’, from now on the solutions of the two equations may be considered simultaneously if, in the case of Maxwell’s equation, we add the prefix ‘Re’ and consider the amplitude A(ω,k) as a real number. In the Schrödinger equation A(ω,k) is complex, its complex conjugate being A*(ω,k).
Because the basis functions form a continuous set, they must not be summed but integrated. Hence any solution of the wave equation can be decomposed into the plane wave functions, f(r,t) = ∫A(ω,k) exp i(ωt+k.r+φ) dωdk, each of which is characterized by a frequency ω and a wave vector k. All functions have these plane waves in common, but different values of A(ω,k) and φ correspond to different functions.
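A discrete numerical version of this decomposition may clarify it; the following is a sketch with hypothetical parameters, in units where ħ = m = 1 and with the common sign convention exp i(kx-ωt). Gaussian amplitudes A(k) are attached to the plane waves, and the integral is approximated by a sum.

    import numpy as np

    x = np.linspace(-50, 50, 2000)             # positions
    k = np.linspace(-2, 2, 400)                # wave vectors in the packet
    A = np.exp(-(k - 1.0)**2 / (2 * 0.1**2))   # Gaussian amplitudes A(k)
    w = k**2 / 2                               # dispersion, units hbar = m = 1

    def packet(t):
        # superpose the plane waves exp i(k*x - w*t), weighted by A(k)
        return np.exp(1j * (np.outer(x, k) - w * t)) @ A

    print(np.abs(packet(0.0)).max())           # a localized packet ...
    print(np.abs(packet(40.0)).max())          # ... whose peak decreases: it spreads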
As far as kinematics is concerned, nothing more can be said about the values of the amplitude and the phase. For freely moving, physically qualified systems these values are determined by their latest interaction, i.e., by the way they were prepared before they started their free motion. This preparation can be understood as the result of a collision with another system; by the birth of the system while emerging from another one (as in the case of a light quantum emitted by an atom); or in an instrumental sense, like light or electrons passing a shutter. In all these cases, the particle’s spatial extension will be determined by that interaction, as well as its temporal extension which is the duration of the interaction. This may be the time the shutter was open or the relaxation time of the emitting atom. Immediately after the particle has started its free motion, the temporal extension can be understood as the time needed to pass a certain point. It is related to the spatial extension by means of the velocity of the particle with respect to that reference point. For particles having subluminal velocities, the spatial extension of the wave packet will always increase.
This means that a single plane wave, although it is a solution of the wave equation, cannot serve as a representation of a kinetic subject. The plane wave cannot even be said to move. Because it is infinitely extended, its appearance varies periodically, but there is no displacement. It also cannot be normalized. This is not serious, precisely because it will not be used to represent a moving subject, and because a wave packet consisting of all plane waves can be normalized by a proper choice of the amplitudes and phases. This is important for the interpretation in which wave packets describe probabilities with respect to future interactions (chapter 8). We should, therefore, consider plane waves as objects in the wave packet.
The wave packet formalism is relevant not only for the motion of a free particle, or of a particle in a field (in which case the Schrödinger equation must be adapted), but also for currents. In fact, Joseph Fourier developed the above theory (named Fourier analysis after him) while studying the problem of heat conduction, early in the 19th century. Also light quanta are individualized currents in the electromagnetic field (11.4).

The velocity of the wave packet is its group velocity, dω/d|k|. Usually, ω and k are not independently variable. For wave packets subjected to Maxwell’s equation, ω=c|k|, hence the group velocity is c, independent of the reference system. If ω is not proportional to |k|, the waves show dispersion. For uniformly moving material particles, satisfying Schrödinger’s equation, ω is proportional to |k|². Hence, the group velocity (the particle velocity) is proportional to |k|, and depends on the choice of the reference system.
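The contrast between the two dispersion relations can be verified symbolically; a minimal sketch using the sympy library (symbols only, no numerical assumptions):

    import sympy as sp

    # Group velocities dw/dk for light (Maxwell) and matter (Schroedinger).
    k, c, hbar, m = sp.symbols('k c hbar m', positive=True)
    print(sp.diff(c * k, k))                  # c: no dispersion
    print(sp.diff(hbar * k**2 / (2*m), k))    # hbar*k/m: the particle velocity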
7.5. Energy and momentum
Now the set of plane waves serving as a basis for the solutions of the wave equation is by no means unique. There is an infinitude of alternative bases, one of which will be discussed in section 7.7. On the one hand it is a bit unfortunate that the set of plane waves is nearly always taken as a basis, because it suggests that this is somehow intrinsic to kinetic and physical subjects, which is not the case. On the other hand, there are, of course, good reasons to single out this basis. One reason is that exponential functions are rather convenient in a mathematical sense.
However, there is a more fundamental argument. The problem of how to describe uniform motion presupposes the isotropy and homogeneity of space and time. Hence the problem becomes: find a complete set of basis functions reflecting the temporal and spatial isotropy and homogeneity of space and time. Or, in group-theoretical terms: find a suitable representation of the Galileo group (or, eventually, of the Lorentz group). Each of these approaches leads to the set of plane waves, which is therefore the most natural basis for the description of uniform motion. This immediately implies that if space, e.g., is not homogeneous (e.g., in the presence of a central field of force) the plane waves, though still possible, do not necessarily form the most suitable representation of a physically qualified subject. For an atom, for instance, one would prefer spherically symmetrical waves.
Now the energy-momentum fourvector of a freely moving physically qualified subject must be a frame dependent constant because of its assumed temporal and spatial homogeneity (5.3). In a similar way the wave packet’s characteristic frequency and wave vector form frame dependent constants for exactly the same reason. However, a theorem developed by Emmy Noether states that each type of symmetry has one and only one such constant. Therefore, the energy E must be proportional to the frequency f (or ω=2πf), and the momentum p must be proportional to the wave vector k, if these objective properties all refer to the same physical system. The proportionality constant is determined only by the choice of the units for these variables, and is therefore a general, modal, universal, constant of nature. It is known as Planck’s constant, h (or ħ=h/2π). The fact that it has the same value for all subjects anticipates the possibility of all physical subjects interacting with each other.[6] Hence,
E = ħω = hf and p = ħk = hσ (where σ = k/2π).
These relations (due to Max Planck and Louis de Broglie, respectively) imply that the frequency and the wave vector are connected by hf = ħω = ħ²k²/2m (because E=p²/2m), where m is the, up till now, unspecified constant in the Schrödinger equation (which was designed such as to give this result).[7] The constant m can now be identified with the mass of the subject, whereas its velocity is equal to ħk/m.
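As a numerical illustration of De Broglie’s relation (a standard example, not taken from the text): the wavelength h/p of an electron accelerated through a hypothetical 100 V.

    # De Broglie wavelength h/p of an electron accelerated through 100 V.
    h, m_e, q = 6.626e-34, 9.109e-31, 1.602e-19   # SI values
    U = 100.0                                     # accelerating voltage (V)
    p = (2 * m_e * q * U) ** 0.5                  # from E = p*p/2m = q*U
    print(h / p)                                  # about 1.2e-10 m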
The nature of Planck’s and De Broglie’s relations does not imply that energy/momentum and frequency/wave vector are conceptually identical. The former is retrocipatory, the latter anticipatory. But these two directions in the intermodal relationships are always strongly related. The proportionality of energy and frequency means that energy is not related to the amplitude of the waves, as was assumed earlier, so that we have to find another interpretation for the amplitude (9.1).
The proportionality of energy and frequency is sometimes misunderstood as energy-quantization. A light beam of frequency f only has particles of energy hf, which seems to imply discreteness. But the discreteness is due to the starting point (a light beam of frequency f). Energy is a variable which has a continuous spectrum, as does frequency, for freely moving subjects. The energy value in classical physics also has a continuous spectrum. Only in bounded systems, such as atoms or molecules, does the internal energy spectrum become discrete.
There have been some speculations in the literature about a possible quantization of physical space and time.[8] But this supposed quantization must not be misunderstood. It simply means that there is perhaps a smallest distance (called hodon) and a smallest time interval (called chronon) by which one can distinguish subjects and events by physical means. It does not mean that any distance or time interval is just an integral number of this hodon or chronon, respectively. This would certainly lead to antinomies. For instance, the diagonal of a square would be equal to its sides.
7.6. Heisenberg’s relations
The shape of the wave packet as determined by its physical preparation is mathematically described by a set of amplitudes A(ω,k), such that the net amplitude is only appreciable within the packet, whereas the composing waves add up to zero outside it. If the relevant extensions are denoted by Δω or Δf, and by Δkx, Δky, Δkz, we find by a very general reasoning that ΔfΔt≥1 or ΔωΔt≥2π, and ΔkxΔx≥2π, etc.[9]
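These width relations can be checked numerically; the sketch below (hypothetical Gaussian envelope) computes the product of the root-mean-square widths Δx·Δk via the discrete Fourier transform. With rms widths the Gaussian attains the minimum value 1/2; the looser bounds quoted above correspond to cruder, full-width measures of the spread.

    import numpy as np

    x = np.linspace(-100, 100, 4096)
    f = np.exp(-x**2 / (2 * 2.0**2))       # Gaussian envelope, width 2.0

    F = np.fft.fftshift(np.fft.fft(f))
    k = 2*np.pi*np.fft.fftshift(np.fft.fftfreq(x.size, d=x[1]-x[0]))

    def rms(u, w):                          # rms width of distribution w(u)
        mean = (u*w).sum() / w.sum()
        return np.sqrt(((u-mean)**2 * w).sum() / w.sum())

    print(rms(x, f**2) * rms(k, np.abs(F)**2))   # about 0.5, the Gaussian minimum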
These relations mean that although a wave packet can be characterized by a certain frequency f and wave vector k, this is not a precise characterization as in the case of a single plane wave. The spread in the values of r and t determines the spread of k and f. If r and t are quite precisely determined such that the packet is small, so many waves are needed that k and f are ill determined, and conversely. These relations for wave-based signals (such as occur in electric communication systems) were already developed by Oliver Heaviside, long before Werner Heisenberg introduced them in quantum physics. They are of a general, modal character, not characteristic of the typical structure of any physical system. In fact, they have a kinetic meaning, anticipating a physical meaning. The shape of the wave packet is determined by some previous interaction, and (because of its probability interpretation) it anticipates a future interaction.
The relations of Planck (E=hf) and De Broglie (p=ħk) yield Δpx·Δx≥h, Δpy·Δy≥h, Δpz·Δz≥h, and ΔE·Δt≥h. These Heisenberg relations say that the energy and the momentum of a particle are not exactly determined, because of the wave character of their motion.
The fourth Heisenberg relation is sometimes criticized because (in contrast to the other three) it cannot be derived from a relation between non-commuting operators (9.2).[10] However, operators do not play an essential role in the kinematic theory of wave motion and operator calculus is not required to derive the above result.[11]
It will not be immediately clear that the wave description with the inherent Heisenberg relations is also valid with respect to systems with high energies – e.g., fast particles in a bubble chamber, or macroscopic bodies. One must first realize that the spread in the Heisenberg relations is not measured relative to the energy or momentum of the subject itself. Hence the spread of energy of a high energy system can be very large compared to the spread of a mono-energetic electron, and still be extremely small with respect to the total or kinetic energy of the system itself. The former means that the system can be sharply localizable (both temporally and spatially), while the latter means that the subject apparently has a very precise value for its energy, because the spread is so small relative to the total energy. The wave phenomena become determinable only if the spread is comparable to the energy, relative position, etc., of the subject itself. Thus, in principle, a planet’s motion must also be described as that of a wave packet, but, as yet, there are no experiments to show this. Nevertheless, the wave theory is in principle not limited to small subjects, and is therefore of a general, modal character.
On the other hand, it has special consequences as soon as the momentum p, for example, is of the order of Δp. If an electron is restricted to a limited spatial region (e.g., a hydrogen atom) the mean value of p has a smallest value determined by Heisenberg’s relations, and thus a smallest energy. If this spatial region can be extended (i.e., if the electron no longer belongs to one hydrogen atom, but to a molecule of two atoms), the electron can decrease its momentum, and thus its energy. This exchange bonding or covalent bonding explains why hydrogen is a diatomic molecule. By a similar argument it can be explained why an electron cannot exist as an independent particle in an atomic nucleus. Its total energy as determined by Heisenberg’s relations would be more than its rest energy. The much heavier mesons can exist independently in a nucleus for a short period of time.
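As an order-of-magnitude illustration of the argument about electrons in nuclei (the confinement length of 10^-15 m is the only assumption):

    # Order-of-magnitude estimate: electron confined to a nucleus of 1e-15 m.
    hbar, c, m_e = 1.055e-34, 2.998e8, 9.109e-31   # SI values

    dp = hbar / 1e-15                    # momentum spread from dp*dx ~ hbar
    E = ((dp*c)**2 + (m_e*c**2)**2) ** 0.5
    print(E / 1.602e-13)                 # ~200 MeV, against ~0.5 MeV rest energy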
Another consequence of the Heisenberg relations has already been mentioned (5.3). If a system is isolated only during a short time Δt, the conservation law of energy has a restricted validity. Energy is now constant only within the limits of ±ΔE = ±h/Δt. For macroscopic systems, this amount is immeasurably small compared to the total energy E. But this inaccuracy has detectable consequences for some subnuclear processes. If the lifetime of some excited state is Δt, the energy of this state is only determined within ΔE = h/Δt.
7.7. Interference and Huygens’ principle
A wave packet is a superposition of waves with different frequencies and wave lengths (the wave length is inversely proportional to the absolute value of the wave vector). Interference of waves occurs if waves of the same frequency are added, meaning that the amplitudes of the waves are added in the manner of complex numbers, i.e., by taking into account the phase relations. The phenomenon of interference is the basis of Christiaan Huygens’ principle, according to which a propagating wave signal can be decomposed into spherical waves.[12] Every point of space is assumed to be the centre of an expanding wave. The actual motion of the signal is the superposition of all these spherical waves with their different amplitudes and phases, which are thus determined by the initial and boundary conditions, as is the case with the above mentioned plane waves. This illustrates the arbitrariness of the choice of the basis for the decomposition of an actual wave packet.
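A minimal sketch of Huygens’ principle applied to a double slit (geometry and wavelength are hypothetical): two secondary point sources are superposed on a distant screen, and the resulting intensity alternates between maxima and minima.

    import numpy as np

    lam, d, L = 1.0, 5.0, 100.0          # wavelength, slit distance, screen distance
    y = np.linspace(-30, 30, 7)          # sample points on the screen

    r1 = np.sqrt(L**2 + (y - d/2)**2)    # paths from the two slits
    r2 = np.sqrt(L**2 + (y + d/2)**2)
    amp = np.exp(2j*np.pi*r1/lam)/r1 + np.exp(2j*np.pi*r2/lam)/r2
    print((np.abs(amp)**2 * 1e4).round(2))   # interference: alternating intensities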
The spherical waves used in this case are less easy to handle mathematically. For instance, it is difficult to prove that light moves approximately rectilinearly and in one direction. This result was only achieved in the 19th century by Augustin Fresnel and Gustav Kirchhoff. This makes it understandable why Isaac Newton’s corpuscle theory of light propagation was favoured above Christiaan Huygens’ theory for over a hundred years. Finally, the experiments of Thomas Young and Fresnel proved the possibility of interference, which cannot satisfactorily be explained in Newton’s theory. In addition, Armand Fizeau showed that Newton’s theory gave a wrong value for the speed of light in a medium.
The plane wave representation is favoured in the description of pure kinetic motion because it reflects the temporal and spatial homogeneity and isotropy which is assumed. In Huygens’ theory only temporal homogeneity is assumed. Therefore, the frequency is still related to energy and remains invariant. But, because every spherical wave has a singular point, it lacks spatial homogeneity, and therefore there is no relation to linear momentum. As a consequence, Huygens’ representation is especially fruitful for the description of the wave’s interaction with rigid bodies in a spatial sense: reflection against a wall; refraction through a boundary between two media in which the velocity is different; diffraction by a slit in a wall (or a hole, or several slits, or a grid). In all these cases the physical details of the interaction are neglected. Only the change of motion due to the spatial environment is considered, by the study of the effect of the interference of the spherical waves in the neighbourhood of these spatial structures.
Huygens’ principle is very successful in the solution of problems of this kind, most of which cannot be solved on the basis of Newtonian mechanics. In fact, it is mainly because of these phenomena (especially diffraction) that the wave theory of motion is accepted. The physical community has become especially convinced of the correctness of the wave theory through interference phenomena. Interference causes photons or electrons to be found at positions unexpected by Newtonian mechanics (and, conversely, to be absent from positions where Newtonian mechanics expects them).
Plane waves must first be decomposed into spherical waves before such experiments can be explained. This is possible (and also the reverse: the decomposition of spherical waves into plane waves) because the spherical waves, as well as the plane waves, form complete sets, such that they can serve as a basis for the decomposition of the solutions of the wave equation. These two possibilities are not exclusive and are only two instances of an infinitude of possibilities. Thus in atomic and solid state physics one often uses a more limited set of plane waves, spherical waves, or even combinations of them.
More than anything else, Huygens’ principle shows the anticipatory character of the wave theory of motion. It can only manifest itself in the interaction of the particle with a rigid body, but in the mathematical description (which is only concerned with the motion of the particle) one completely abstracts from all the physical details of this interaction. The wave theory is a kinematic theory, developed anticipating the physical relation frame.
7.8. The wave-particle duality
The distinction of waves as objects and wave packets as subjects in the kinetic relation frame shows that there is not really a wave-particle duality in a kinetic sense. Any physical and kinetic subject can only be represented as a wave packet, which has some of the characteristics of a particle – namely, a more or less precise position, momentum and energy. However, because the 19th-century mechanist doctrine maintains that all physical phenomena must be reduced to the motion of unchangeable pieces of matter, an elementary particle is defined (or rather deified) as a mass point having definite values at a particular time for its position, energy, and momentum. This philosophically coloured idea of a particle clashes with the concept of a wave packet. This is the reason why wave theory and especially the Heisenberg relations have been the subject of so many discussions.[13] It should be stressed that the wave packet is only a modal and anticipatory description of moving, physically qualified subjects. Therefore it is limited in two respects.
First, the wave packet itself does not describe interactions in a subjective sense. This is in sharp contrast to the classical concept of moving particles, whose extension was supposed to be impenetrable (wave packets are far from that). The assumed impenetrability gives rise to the possibility of collisions between the particles – in fact the only kind of interaction admitted in Cartesian mechanics. Huygens’ principle only enabled him to give an objective description of the kinematical consequences of a very simple kind of interaction – namely, an interaction in which the wave packet collides with a rigid spatial system, and all physical details are disregarded. Hence, the collision between two atoms can be described by the wave theory only after the typical structure of the interaction is translated into spatial terms (the collision cross section). However, real interactions, especially those in which the internal state of one system is changed (e.g., if a system is absorbed), cannot be understood within the framework of wave theory alone.
Secondly, the wave theory gives a modal description and therefore discards all typical properties of the described subjects. It is quite irrelevant whether one is dealing with electrons or light quanta if Huygens’ principle is applied. The diffraction patterns made by a beam of light or by electrons of comparable wave lengths passing through a hole or a crystal are similar. This was predicted by Louis de Broglie in 1923 and confirmed soon afterwards by Clinton Davisson and Lester Germer.[14]
The kinetic character of diffraction and interference also manifests itself in the two-slit experiment, in which a wave packet is split up. Interference of the two parts occurs as soon as they meet each other again.[15] It should be emphasized that in a kinematic sense the splitting of a wave packet into two parts (after passing a screen with slits, for instance) is no problem. It appears that after this transition one has two wave packets, two subjects spatially divided. Indeed, in a spatial sense, one cannot speak of one subject if its parts are not connected. But wave packets are kinetic subjects, and their parts must be connected not spatially but kinetically. Indeed, the two parts of the wave packet, after passing the double slit, are kinetically coherent. The well-known interference phenomena are explained by assuming that the two parts of the wave packet have well-determined phase relations.
The diffraction experiments especially emphasize the fact that the wave theory cannot account for the individuality of the particles and their individual interactions. The waves, rather than the particles, interfere in diffraction, reflection or refraction. This was not always clearly recognized. At first, some people tried to explain these phenomena by assuming that different particles interfere. But experiments soon showed that, if one has a very dilute beam with only one particle at a time in the apparatus, one still has the same diffraction pattern.
Thus one assumed that the waves in a single particle interfere with each other. But even this is objectionable. For example, interference between the beams emerging from two lasers is possible. In this case one can also dilute the beams such that no particles are present roughly 90% of the time and one particle is present roughly 10% of the time. The interference phenomena were decisively different depending on whether both lasers were open or one of them was closed. Thus a particle emerging out of one laser interferes with the field of the other one, even when there are no particles coming out of the latter.[16]
The wave theory itself cannot give a full account of the individual behaviour of the described subjects. It has to be supplemented by an interpretation which is no longer of a purely modal character. On the one hand, one has to give a probability interpretation of the waves describing the motion of the particle. The theory of probability is also an anticipatory one, which explains its strong connection with wave theory. On the other hand, it must be shown that the physical concept of a particle refers to a typical structure (chapter 11).
[1] This is the time needed to have the temperature difference decrease by a constant factor, the exponential unit (e = 2.718 …)
[2] Jammer 1966, 237ff; Hanson 1959, 450ff; Tolman 1938, 42.
[3] Margenau 1950, 182.
[4] Beth 1944a, 132ff.
[5] φ is an arbitrary unspecified number. It is sometimes called the phase, but just as often this name is used for the whole argument r+ct+φ. Note that r=|r|.
[6] Messiah 1958, 149.
[7] The Schrödinger equation as given above (7.3) must be slightly adapted to account for the occurrence of h.
[8] Margenau 1950, 150ff; Jammer 1954, 184; Russell 1927, 42.
[9] Heisenberg 1930; Jammer 1966, 323ff.
[10] Bunge 1967a, 267ff.
[11] Messiah 1958, Chapters 4 and 8.
[12] Huygens 1690.
[13] See, e.g., Bohr 1949; Jammer 1974, Chapter 3; Klein 1970; Margenau 1950, Chapter 16; Reichenbach 1951, Chapter 11; Price, Chissick 1977.
[14] Jammer 1966, 246, 251; Klein 1964.
[15] See on interference experiments Feyerabend 1962, 199ff; Jauch 1968, 112ff; Bohr 1949; Fine 1972; Reichenbach 1944, 24-32.
[16] Pfleegor, Mandel 1967.
Chapter 8
Individuality and probability
8.1. Individuality
8.2. Statistical measurements
8.3. Static theory of probability
8.4. Interpretations of probability
8.5. Classical statistical mechanics
8.6. The physical qualification of probability
8.1. Individuality
Section 1.2 stated as the first basic problem of science: Are there general modes of experience which provide an order for everything within the creation, and if so, which are these universal orders of relation? Chapters 2-7 supplied an answer to this question by studying the first four modal aspects and their retrocipations and anticipations both on the law side and the subject side. However, both mathematics and physics are not only concerned with modal laws and subjects, but also with special laws like those of electromagnetism, and typical structures like that of the copper atom.
In other words, physics is also confronted with the second basic problem: How can stable things exist, and how can they change? Before the discussion of this question in part II, chapters 8 and 9 will pay attention to the law-subject relation for individual systems, leading to the theory of probability. Statistics applies the theory of probability to the properties of a collection or ensemble of systems with the same typical structure.[1] For the time being it will not be necessary to know which structure that would be.
The dynamic development of nature strongly depends on the existence of random processes. In a classical context these will be discussed in the present chapter, whereas chapter 9 is concerned with quantum physics. This section critically reviews determinism, both in classical and in quantum physics.
Determinism in classical physics
The necessity of using probability in physics was not always recognized. Until the beginning of the 20th century, classical mechanics served as a deterministic prototype of the physical sciences. Mechanics is almost exclusively concerned with motion as a mode of being of physically qualified subjects. Abstraction took place on the subject side from all concrete properties which do not relate to the kinetic aspect. Each concrete thing is thereby reduced to a modal moving subject. Because it remains physically qualified, nevertheless, the retrocipatory aspects of interaction (mass, energy, force) have to be included. The simplest objects of mechanics are mass points, with forces acting between them.[2]
In a deterministic interpretation this kinematic aspect is absolutized. All other aspects, which together with the kinematic one determine concrete reality, are ignored or dismissed as secondary qualities. When mass, position, velocity, and external circumstances (seen as forces or force fields) are given at a specific point at a certain time, motion is fixed in relation to past and future. Even contemporary authors characterize particles as being localizable.[3]
On the law side a correction of this rigorous functionalistic determinism was offered by classical chemistry, whose basis was laid by Joseph Priestley, Antoine Lavoisier, John Dalton, Jöns Berzelius and others from the end of the 18th century onward. It differed from mechanics in that it ascribed typical properties to its objects, the elements consisting of similar atoms, and the chemical compounds consisting of similar molecules.
In physics a merely modal, deterministic approach first began to fail on the subject side. The individuality of atoms and molecules made its entry, first in statistical mechanics, then in radioactivity, and finally in Brownian motion. In chemistry essentially probabilistic reasoning underlies the law of mass action in chemical equilibrium established by Cato Guldberg and Peter Waage (1864).
However, both chemists and physicists still believed in determinism. Statistical methods were only used for practical reasons because a fully deterministic calculation of the motion of the many particles constituting a gas was (and is) beyond human capabilities.[4] Although radioactivity was considered to be a mystery, at the turn of the 20th century physical scientists were still confident that it could be solved along deterministic lines, i.e., by a modal theory.
Indeterminism in quantum physics
All this changed as a result of the development of quantum physics in which better distinction is made between an individual system and its state. This state has, in a certain sense, a latent character for an isolated system, manifesting itself only if the system interacts with another one – for instance, but not exclusively, a measuring apparatus. According to quantum physics, the individual state of the system does not exactly determine the result of the interaction. The initial and final states of the system are not related in a purely modal, determined way, but by means of a probability law.
This so-called stochastic relation is therefore not lawless. The probabilities of the joint initial and final states as numerical predicates of possible interactions are determined by the typical structure (the law) for the interacting systems. There are many different interpretations of this state of affairs, three of which we shall briefly discuss.
A small number of physicists (among others, Albert Einstein,[5] Erwin Schrödinger, David Bohm[6], and Louis de Broglie) remained loyal to determinism and therefore hypothesized the existence of (as yet) unknown determining factors (called hidden variables). In his mathematical analysis of quantum physics, John von Neumann[7] has shown that hidden variables cannot weaken the indeterministic structure of quantum physics (if the latter is correct). Physicists who still consider determinism, or rather a purely modal theory, as exclusively acceptable, are forced to assume that although the quantum physical formalism accurately describes the phenomena, it is nevertheless incorrect or incomplete. In principle this view cannot be contradicted, but it is not very convincing as long as its proponents have not succeeded in designing a theory along these lines.[8]
A majority of physicists emphasized the measuring process.[9] According to this view, one does not really know anything about a closed system. Only the results of measurement are verifiable, and during measurement the examined system cannot be isolated. But the result of measurement is not only determined by the character and the state of the system, but also by the action of the measuring instrument. This is called the measurement disturbance. Taken by itself, this phenomenon was not invented by quantum physics, of course. Classical physics also knew about errors in measurement, but physicists believed that in principle the measurement disturbance could be made arbitrarily small. In quantum physics, this is no longer tenable. The discovery that all moving subjects must be described with the help of wave packets implies that measurement disturbance cannot be arbitrarily small.
According to the so-called Copenhagen interpretation[10] - of which there are several variants – it is quite possible that an isolated system is completely determined. However, this is considered to be a meaningless proposition because it is not experimentally verifiable. Within this concept, the problem of individuality of physical systems is disposed of as an epistemological problem about the relation between observing subject and observed object.
Niels Bohr once observed that ‘… a not-further analyzable individuality … has to be attributed to every atomic process …’[11] The individuality of atomic processes belongs to the heart of Bohr’s interpretation of quantum physics.[12] However, in Bohr’s view this individuality is not intrinsic to physical systems and processes, but arises from the relation between a human subject and a sub-human object. I admit that observations and measurements are human acts, which besides the logical and psychic aspects also have a physical one. But it is only this physical aspect which one needs to take into account in the discussion of the limitations of measurement. It arises from the interaction between the object of measurement and the measuring instrument (possibly the human senses). This implies that the object cannot be considered isolated.[13] Theoretically, the study of isolated systems is preferred, but in measurements one observes a system while it is interacting with a measuring instrument. In this interaction one does not have to consider a subject-object relation of observer and observed system, but a subject-subject relation of two interacting physically qualified systems.[14]
According to a third interpretation the state function is not related to a single system, but represents the way in which an ensemble of similar systems is prepared. It is possible to determine the state function by means of a large number of measurements on the ensemble, but this procedure is meaningless for a single system, whose individuality must be ignored. The state function is an expression of our knowledge of the ensemble.
All three interpretations emphasize undeniable states of affairs. Two aspects which they have in common can be criticized.
First, they all refer, implicitly or explicitly, to the deterministic interpretation of classical physics without being sufficiently aware of its philosophical bias. In the first interpretation, the determining factors are taken to be unknown as yet. In the second it is posited that they cannot be measured if they exist. In the third one takes recourse to the ensemble because it is assumed to be fully determined. Hence there is, in effect, no break in principle with 19th-century determinism. For instance, Heisenberg posits that only its premise is invalid, i.e., the premise: ‘If at a certain moment position and velocity of all particles are known.’ [15] Heisenberg therefore does not consider determinism as incorrect, but rather inapplicable in quantum physics.[16]
Secondly, the mathematical formalism of physics, in fact, does not receive its due. It is generally accepted that the theory has a statistical character. The first interpretation mentioned above recognizes that the formalism describes the phenomena accurately, but it refuses to accept the conclusion that physical phenomena themselves have a stochastic character displaying individuality. The second interpretation misses the point that measurement disturbance has no significance for the calculation of the probable measurement results. The third interpretation ignores the fact that the mathematical formalism ascribes a state function to each separate system. Moreover, it must be observed that the application of statistical laws, for example, in quantum physics with respect to radioactivity, assumes that the decay of different atoms constitutes statistically independent events.[17]
The underestimation of the mathematical formalism is not so strange because the formalism is generally considered to be merely a handy framework within which empirically discovered physical law structures can be summarized. After all, is not mathematics a free creation of the human mind? This is true as far as mathematics is a theoretical opening up of some modal aspects of temporal reality. But these are modal aspects of concrete reality which make its understanding possible. The mathematical formalism of quantum physics is more than a convenient representation of human knowledge of inorganic structures. It is the theory regarding their mathematical aspects, and an objectification of their physical aspect.
Individuality in physics and philosophy
Quantum physics does not prove that individuality may be attributed to physically qualified subjects, but it leaves room for such a conclusion. No special science can solve this philosophical problem. A scientific theory, seeking as a matter of course to stay close to empirical concrete reality, is able to display a deterministic structure excluding the possibility of individuality, but is also able to leave room for individuality. The former is the case with classical physics while the latter occurs in modern physics.
In itself it is correct that science distances itself from individuality. Science involves abstraction, and the first abstraction to be made is one from individuality. A solid state physicist will do many experiments with a single crystal, yet his interest is not directed to this one crystal, but extends either to the modal physical laws to which the crystal is subjected or to its typical structure. In the analysis of the results of his measurements he constantly abstracts from the subjective individuality of the object of measurement. In this respect quantum physics disregards individuality as much as classical physics did.
However, it has become necessary to account for the fact that natural phenomena cannot be completely described in a deterministic way. This is a philosophical matter, and before one can start its analysis, one has to make a choice concerning the individuality of natural subjects, whether it will be accepted as a matter of fact or not. According to determinists, the assumption of determinism in matter is less a result than a condition for science.[18] After posing the dilemma: Natural necessity (fully determined by law) or chance (in the sense of absolute arbitrariness), they reject the latter.[19] In particular they reject the subjective individuality of e.g. radioactive particles, each having separate existence.[20]
It is also possible to reject the dilemma,[21] replacing it by the correlation of law and subject, which cannot be reduced one to the other. Determinism reduces the subject to the law while pure chance eliminates the law. In my view, individuality is not an afterthought, a result of a conclusive analysis, but a premise for understanding physics.
8.2. Statistical measurements
In experimental physics measurements are usually repeated many times. Often every single measurement already yields a meaningful result, and one only repeats the measurement in order to improve on the accuracy by elimination of possible errors. In statistical measurements on the other hand, a single result has no immediate meaning. For instance, if one wants to determine whether a certain die is a fair one, a large number of trials have to be performed to find out whether the distribution of throws over the six possibilities confirms the typical law for a die.
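Such a statistical measurement is easily simulated; a minimal sketch (simulated throws of a fair die, the seed is arbitrary):

    import random
    from collections import Counter

    random.seed(1)
    throws = [random.randint(1, 6) for _ in range(60000)]
    counts = Counter(throws)
    for face in range(1, 7):
        print(face, counts[face] / len(throws))   # each close to 1/6

No single throw means anything here; only the distribution over many trials can confirm or refute the typical law for the die.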
Until the end of the 19th century this type of statistical measurement was not very important in physics. What is usually called statistical physics does not owe its name to its measurement procedure, but to a theoretical explanation (with statistical means) of macroscopic properties assumed to be generated as the average result of the relative motion and mutual interaction of the composing molecules. For this reason the theory of measurement as discussed in chapter 3 may be called classical.
Statistical measurements, especially those in (sub-) nuclear, atomic and molecular physics, first became important in the discovery of radioactivity. Their importance was enhanced in the interpretation by Albert Einstein (1905) of the molecular motion discovered by Robert Brown (1827) and measured by Jean Perrin (1908), and the scattering experiments by Ernest Rutherford (1911).
Such an experiment may proceed in the following way. A number of atoms is prepared in the same initial state with the help of a so-called state selector. For instance, the atoms may all have the same initial momentum and energy (within the margins set by Heisenberg’s relations). This state is disturbed by some interaction with a scattering system. Finally one measures how the state of the system is changed – for instance, the angle of deflection is measured. In this way Rutherford determined the size of gold nuclei.
Generally, the atoms will not react in the same way to the disturbance. Therefore, this experiment must be repeated many times in order to find the spectrum of the measurement results, and the statistical distribution of this spectrum. The former shows us the possible final states for the interaction, whereas the latter is determined by the relative probability of a final state for a given initial state. The experiment is repeated for other initial states in order to determine the transition probability, connecting a certain initial state with a certain final state.
Thus in statistical measurements we have both a counting procedure (the determination of the statistical distribution) and a measuring procedure (the determination of the spectrum of some measurable property). The latter does not differ basically from what was discussed in chapter 3. Only if the spectrum is continuous does it have to be broken up into a discrete number of intervals, in order to make it possible to count the number of occurrences in each interval. Counting is not directly possible with respect to a continuous spectrum.
8.3. Static theory of probability
The theory of probability first of all has to account for two things: the spectrum of possible properties (which are simultaneously possible, so that the spectrum displays a spatial ordering), and the statistical distribution of relative frequency of occurrence of these possibilities, which has a numerical character. This section briefly recalls the formal topological properties of the spectrum and its measure.[22]
Probability is a numerical measure over a set of possibilities,[23] formally defined as a non-negative numerical measure P(A) for any sub-set A of U,[24] such that:
- P(A)≥0
- P(U)=1 (normalization)
- if A∩B=∅, then P(A∪B)=P(A)+P(B) – probability is an additive measure on the disjoint sub-sets of U.
This definition is sufficient to prove a number of theorems, such as:
- P(∅)=0
- 0≤P(A)≤1, for any sub-set A
- P(A∪B)=P(A)+P(B)−P(A∩B) for any two sub-sets A and B.
Two other important concepts are defined as:
- Conditional probability: if P(B)≠0, P(A/B)=P(A∩B)/P(B). Clearly, P(A) is just short for P(A/U).
- A is statistically independent of B if P(A/B)=P(A) and P(B/A)=P(B), i.e., if they have ‘no common cause’.[25]
This leads to the following theorems:
- if A and B are disjoint (A∩B=∅): P(B/A)=0
- if A⊂B: P(A∪B)=P(B), P(A∩B)=P(A), P(A/B)=P(A)/P(B).
- if A and B are statistically independent: P(A∩B)=P(A)·P(B).
Note that the property of statistical independence is a property of the spectrum and not of the statistical distribution.
This means that probability as a measure on a set U is not the only measure satisfying the above definitions and theorems. If U has a finite number u of elements, P(A) can be interpreted as the number of elements in the sub-set A, divided by u, i.e., the relative number of elements in A. Since this interpretation has the same formal properties as probability, the two are isomorphic. This isomorphy is the theoretical basis of the measurement of probability. It can also be used in the statistical definition of entropy.
If U is a spatial figure, and A is a spatial part of U, P(A) can be interpreted as the spatial magnitude of A (i.e., its length, area, or volume) relative to that of U. This formal relationship with probability is used in the statistical conception of a phase space (6.6, 9.1). If a set consists of n mutually statistically independent subsets, it can be projected onto an n-dimensional space. For instance, the possible outcomes of casting two dice simultaneously are represented on a 6x6 diagram.[26]
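The 6x6 diagram of two dice also illustrates the definitions above, in particular the product rule for statistically independent sub-sets; a minimal sketch (the two events are chosen arbitrarily):

    from itertools import product
    from fractions import Fraction

    U = list(product(range(1, 7), repeat=2))       # 36 equally weighted outcomes

    def P(event):                                  # probability as relative number
        return Fraction(sum(1 for u in U if event(u)), len(U))

    A = lambda u: u[0] == 6                        # first die shows six
    B = lambda u: u[1] % 2 == 0                    # second die is even
    print(P(lambda u: A(u) and B(u)), P(A) * P(B)) # both 1/12: independent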
8.4. Interpretations of probability
The formal system described above does not determine the probability function beyond its limits. For all sub-sets A not equal to ∅ or U, P(A) is only known to lie between zero and one. A further specification is needed, which can only be found by studying the typical properties represented by the set U. P(A) is a measure or weight function of the sub-sets A relative to U. Three cases can be considered.
(a) It is often possible to assume on rational grounds that different sub-sets have equal weight, because of some symmetry relation. In this way simple problems can be solved, such as occur in dice or card playing, assuming that the dice are not loaded and the card players are honest. In several more complicated problems which occur in quantum physics, for example, the symmetry of the systems concerned can facilitate their solution.
(b) Sometimes it is possible to design a theory to calculate weights which are not equal because of symmetry. In classical statistical physics one finds the beginning of this approach (8.5). It is fully developed in quantum physics (chapter 9).
(c) If there is no theory available, the only way to determine the probability function is by experiment. Even in this case the law is not reduced to the subject side. Also frequency hypotheses based on statistical extrapolation, such as mortality tables, can only be used if they are assumed to represent some kind of regularity, since there is no logical justification for the conjecture that frequencies will remain constant, and thereby permit extrapolation.[27] Probably, with few exceptions, all statistics in the non-physical sciences is of this type. In the first two cases, (a) and (b), experiments also remain important, of course. Theories are never a priori, but hypothetical, and must therefore be checked experimentally.
The fact that the probability function depends on the typical structure represented by the set U, and that there are three possibilities of determining this function, has not always been recognized clearly enough. This may explain why there is so much disagreement about the interpretation of probability.[28]
Ontic and epistemic probability
Probability in an ontological context is often confused with the epistemological probability of a statement. Ontologically, probability does not refer to knowledge (or lack of it), but to the variation allowed by a character. Determinists assume that probability is an epistemological matter: ontologically, any system would be completely determined by physical laws, and probability is applied only because of the investigator’s lack of sufficient knowledge of a system. Only in quantum physics are intrinsically stochastic processes acknowledged.
However, this view does not withstand scrutiny. Consider the simplest example, throwing a die. It is assumed that the outcome could be predicted if one knew the system in sufficient detail. If one pursues this path down to the atomic level, one inevitably reaches a point where quantum fluctuations start to play a part. Therefore, if one accepts ontological indeterminacy at the quantum level, one has to accept it at the macroscopic level as well. One could not even say that for practical purposes the result of throwing a die is determined by physical laws, for the application of this principle to any practical case is virtually impossible. In fact, in any game of chance one had better start from a distribution of chances based on the symmetry of the game, and on the assumption that the actual process is stochastic.
Logicist philosophers who do not recognize the typical law determining the probability function and the set U, conceive of probability as a logical relation between propositions.[29] The theory is especially designed to give account of the inference of laws from empirical facts.[30] In my view, the outcome of experiments as described in section 8.2 reveals the individuality of the interacting subjects, and cannot be accounted for in a purely logical way. In science, probability does not describe our knowledge of physical systems, but their lawfully determined individual behaviour. Margenau rightly rejects this logicist interpretation as being irrelevant in science.[31]
Classical interpretation
The classical interpretation, drafted by Blaise Pascal, Abraham de Moivre, Daniel Bernoulli, Pierre-Simon Laplace and others, is directed to the first possibility described above. It is applicable if the symmetry of the problem allows us to find disjoint sub-sets A of U, such that these sub-sets have equal weight and together add up to U.[32] Therefore, the classical interpretation assigns equal probabilities to equally favourable cases. The founders of this theory were mainly inspired by games of chance, such as dice playing. This theory is also applied in classical mechanics (8.5). It clearly breaks down if no equally favourable possibilities can be found.
The classical view is sometimes criticized because of its alleged circularity, the equally favourable cases being definable because they have equal probabilities. But, as our examples show, in those cases covered by the theory the equally favourable cases are inferred from the symmetry of the systems. Thus the classical theory can only be criticized because of its limited scope, and is, in fact, still of great importance – e.g., in quantum physics (chapter 9).
The frequency interpretation
At the other extreme (clearly referring to the third possibility) one finds the definition of probability as a relative frequency of occurrence.[33] As a definition, it reduces the law to the subject side, or metric to measurement. Indeed, the measurement of probability can only be performed by determining the relative frequency of the occurrences of every possible case.[34] But it seems a somewhat defeatist reaction to the failure of the classical definition to assume that in no case can a lawful metric for this probability be found. At any rate, such laws can be found in quantum physics.
In his early publications Karl Popper[35] defended a variant of the latter view. Later he developed the classical theory into the ‘propensity interpretation’ of probability. It corresponds to the second possibility (b) mentioned above, but introduces ‘weighted’ instead of ‘equal’ probabilities. According to Popper, we have to
‘… interpret these weights of the possibilities (or of the possible cases) as measures of the propensity, or tendency, of a possibility to realize itself upon repetition’.[36]
My view comes quite close to Popper’s interpretation as far as classical probability is concerned. I distinguish the formal theory described in section 8.3 from the typical law which varies for different systems. Moreover I distinguish the law side, defining the set U of possible cases (the spectrum), and the probability function describing their weights, from the subject side (actual occurrences of the possible cases). These principles were applied in classical statistical mechanics.
8.5. Classical statistical mechanics
The main application of probability theory in classical physics is statistical mechanics. It is based on the assumption that a gas, e.g., consists of a large number of similar molecules, which can only differ by their position, velocity, mass, and moment of inertia. The kinds of motion considered are linear motion and rotation (sometimes also vibration). Rotation and vibration are only considered in polyatomic molecules since atoms are supposed to be point-like. It is of interest to mention this because, if the finite extension of the atoms is taken into account, the method of classical statistical mechanics breaks down.
Statistical physics is often thought of as replacing thermodynamics, or at least as providing its foundations. But statistics is not a purely modal theory (because of the assumption of the existence of similar molecules) and therefore cannot be the basis for the entirely modal thermodynamical theory. On the other hand, the latter is inferior to statistical mechanics, which can, by its nature, be applied to typical problems. Statistical mechanics is also easily incorporated into quantum physics – which is required if one wishes to understand why classical statistical mechanics is applicable at all. Finally, thermodynamics is mainly retrocipatory (chapter 5), whereas statistical physics is anticipatory. In classical statistical mechanics there are two approaches, put forward mainly by James Clerk Maxwell, Ludwig Boltzmann, and Josiah Willard Gibbs. I shall briefly discuss these in order to show the application of the formal theory as discussed above.
Maxwell and Boltzmann
The starting point of the approach by Maxwell and Boltzmann is the so-called Maxwell distribution for the molecules in an ideal gas.[37] The following assumptions are made. (a) The molecules are fully described by their position r, velocity v, and mass m. (b) The particles do not interact with each other. This implies that the probability of finding a particle with a certain value for (r,v) is independent of the positions and velocities of other particles. Thus it is sufficient to derive the one-particle probability function, which must be multiplied by the number N of molecules in order to find the distribution function for the gas. (c) The distributions for r and for v, respectively f1(r) and f2(v), are mutually independent: f(r,v)=f1(r)·f2(v). (d) There is equilibrium, which means that the distribution function is spatially homogeneous (if there is no external field) and isotropic. Homogeneity means that f1(r)dr=const·dr. The constant is found by normalization and is equal to the inverse of the volume V of the gas: f1(r)dr=(1/V)dr. Isotropy implies that the probability function is independent of the direction of the molecular speeds: f2(v)=f2(|v|), or f2(v)=f2(|v|²)=f2(vx²+vy²+vz²). (e) The three coordinates vx, vy, vz are mutually independent, meaning that f2(v)=fx(vx)·fy(vy)·fz(vz).
These five assumptions are sufficient to show that f2(v)=a·exp(−½βmv²).
The factor a can be found by normalization, the factor −½mβ in the exponent by calculating the pressure P such a gas would exert on the wall. One finds 1/β=PV/N, which means that β=1/kT because of the ideal gas law PV=NkT (T is the temperature; k is Boltzmann’s constant, whose value is determined only by the choice of units).
It will be clear that the Maxwell distribution is found from symmetry arguments.[38]
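For reference, the distribution these symmetry arguments lead to can be written out in full; the normalization constant below is the standard textbook value, added here for completeness:

f_2(\mathbf{v}) = a\, e^{-\frac{1}{2}\beta m v^2}, \qquad
a = \left(\frac{m}{2\pi kT}\right)^{3/2}, \qquad
\beta = \frac{1}{kT},

so that the mean kinetic energy per molecule is ⟨½mv²⟩ = (3/2)kT.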
Boltzmann recognized that the term in the exponent is just the kinetic energy of the molecule, divided by −kT. If we now introduce the concept of a state of the molecule, characterized by its velocity and position, we find that the relative probability of finding a particle in either one of two states with energies E1 and E2 is exp[−(E1−E2)/kT].
This was generalized by Boltzmann (and it is still the foundation of all statistical physics) to any system in equilibrium consisting of molecules or other particles which can freely exchange energy. It is nothing but an a priori assumption concerning ‘equally favourable cases’. If two states have the same energy, their probability is the same. If they have different energies, their relative probability is given by the Boltzmann factor (as it is called). If the set of possible states has a continuous spectrum (which is not the case for a classical gas), it is the probability density which is determined by the Boltzmann factor.
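As a numerical illustration of the Boltzmann factor (a sketch added here; the 0.1 eV level spacing is a hypothetical example):

import math

k_eV = 8.617333262e-5   # Boltzmann's constant in eV/K

def boltzmann_ratio(e1, e2, T):
    # relative probability of two states: P(E1)/P(E2) = exp(-(E1-E2)/kT)
    return math.exp(-(e1 - e2) / (k_eV * T))

# Two states 0.1 eV apart at room temperature: the upper state is occupied
# with only about 2% of the probability of the lower one.
print(boltzmann_ratio(0.1, 0.0, 300.0))   # ~0.021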
Microstates and macrostates
Whereas Maxwell and Boltzmann considered one system consisting of many molecules, Josiah Willard Gibbs[39] studied an ensemble, an infinite number of systems similar in their structure and boundary values, but with different microstates (10.2). Above we defined the state of a single molecule as being characterized by its position and velocity. The microstate of a system of molecules is the juxtaposition of all molecular states, while the macrostate of the system enumerates its macroscopically determinable properties, such as volume, pressure, and temperature.
The microstate of a system can be represented by a point in a 6N-dimensional phase space, N being the number of molecules in the system. There is a many-to-one relationship between microstates and macrostates. Many microstates may correspond to a certain macrostate, but a microstate fully determines the corresponding macrostate. Therefore, if all microstates are equally probable, the relative probabilities of macrostates are proportional to the numbers of their corresponding microstates. In the case of a continuous spectrum of possibilities a macrostate can be represented by a region in the 6N-dimensional space of microstates, and its probability is proportional to the volume of this region (6.6).
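The many-to-one relationship between microstates and macrostates is easily made concrete (a toy sketch added here, with spins standing in for molecules):

from itertools import product
from collections import Counter

# Toy system: four two-valued molecules (spins). A microstate is the full
# configuration; the macrostate records only how many spins point 'up'.
micro = list(product('ud', repeat=4))           # 16 equally probable microstates
macro = Counter(m.count('u') for m in micro)    # the many-to-one mapping

for ups, n in sorted(macro.items()):
    print(ups, 'up:', n, 'of', len(micro))
# The macrostate with 2 spins up has the most microstates (6 of 16) and is
# therefore the most probable: the germ of the statistical notion of equilibrium.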
According to Gibbs all microstates are equally probable, as far as they are accessible by the system, i.e., as far as they are compatible with one or more restrictions or constraints. In the case of a completely isolated system, for which the energy is constant, Gibbs introduced the ‘microcanonical ensemble’. Here, all microstates with the same energy are equally probable (other states, not being accessible, have probability zero). For systems at constant temperature, for which the energy may fluctuate, Gibbs defined the ‘canonical ensemble’, in which the relative weight function for different microstates is the Boltzmann factor, exp[−(E1−E2)/kT]. Finally, if the number of molecules, as well as the energy, is undefined, the ‘grand canonical ensemble’ applies, with the weight function exp[−((E1−E2)−(N1−N2)μ)/kT], called the Gibbs factor, wherein μ is the ‘chemical potential’. In this theory the entropy, the free energy, and other important thermodynamic variables can easily be defined.
The approaches of Maxwell, Boltzmann and Gibbs are mentioned here, in the first place, to show the a priori character of their basic assumptions. These can only be justified ‘… by the correspondence between the conclusions which it permits and the regularities in the behaviour of actual systems which are empirically found.’[40]
This discussion also shows the typical and individual character of these theories. In both approaches, the similarity of the systems to be studied is a basic assumption. In the 19th century each molecule was assumed to be identified by its position and velocity at any time. But quantum physics has shown that the position of every individual molecule is not relevant, but only the distribution of all molecules over the accessible states. What is relevant is that a point in the six-dimensional phase space is occupied, not which molecule happens to be there.
The classical approach as seen from the viewpoint of quantum statistics
In quantum physics the symmetry of the state function for similar particles allows of two possibilities. A single molecular state can be occupied by at most one particle if it is a fermion, or by an unlimited number of particles if they are bosons (11.6). The Maxwell-Boltzmann distribution is now only a limiting case (for very small occupation probabilities) of the more fundamental distribution functions: Fermi-Dirac, for fermions, and Bose-Einstein, for bosons. Whether a particle is a fermion or a boson is determined by its typical structure.
Once again this shows that statistical physics is not a fully modal theory, although it has very general features. In fact, the correct derivation of the classical Maxwell-Boltzmann distribution can only be given from quantum statistics because the classical assumption of the complete identifiability of the molecules in kinetic terms leads to an overestimation of the number of possible microstates.[41]
There are more considerations indicating that the classical approach can only be justified by quantum physics. One is the assumption that monatomic molecules have only three degrees of freedom (i.e., the number of coordinates necessary to specify the molecule’s relative position), whereas diatomic molecules, for instance, have two additional degrees of freedom.[42] Especially when the internal structure of atoms consisting of a nucleus and several electrons was discovered, it became clear that this assumption is incomprehensible from a classical point of view.
However, quantum physics accounts for the existence of discrete energy levels which are dependent on the internal structure of atoms and molecules. These levels are widely spaced, such that electronic transitions from the ground state to the first or higher states do not occur at normal temperatures. But rotational states for diatomic molecules are less widely spaced, and therefore they can be excited easily at room temperature. This has consequences for the specific heat of a gas, which is nearly equal to (3/2)Nk for a monatomic gas, as well as for a diatomic gas at 50 K, whereas this value increases to (5/2)Nk for diatomic gases at higher temperatures. Both the temperature dependence of the specific heat and, more fundamentally, the applicability of classical statistics to normal gases can therefore be understood only from the quantization of energy levels according to quantum theory (chapter 9).[43]
8.6. The physical qualification of probability
Although probability is presented as a numerical measure over a set of possibilities, it is also physically qualified in classical as well as in quantum theories. In either case one of the more or less probable possibilities must be actualized. This actualization only occurs in some interaction – for example, shuffling cards, throwing dice, interactions in classical and quantum physics. The temporal order of possibility and its actualization is clearly asymmetrical, anticipating irreversibility.
The ergodic problem
The theories of Boltzmann and Gibbs lead to a description of the equilibrium state of a system as the most probable state. Ludwig Boltzmann explained the irreversible approach to equilibrium by the assumption that any actual system will proceed through all accessible states, such that the spatial average in phase space is equal to the temporal average for a single system. This so-called ergodic theorem (or a weaker quasi-ergodic theorem, according to which every accessible microstate will be approached arbitrarily close after some time) has been the subject of intensive mathematical research, but cannot be proved except for very simple systems under severe restrictions.[44]
Apparently, this problem cannot be solved in a purely modal theory, because it only has meaning if the systems in the spatial ensemble all have the same structure, whereas in calculating the temporal average it is assumed that the system retains its typical individuality during its passage through all possible states. The fact that the two averages must be the same is therefore not something which must be proved, but lies at the basis of all statistical methods. It assumes that the same system has a constant typical structure, or that similar systems are subjected to the same structural law. It says that the typical law is valid during any time, and for all systems under consideration.
Internal interactions
The calculation of the entropy and related properties of a system is usually possible only for simplified systems of non-interacting molecules, such as the ideal gas, or the linear chain of magnetic molecules.[45] It is remarkable that such a system will not do the job. Because the molecules do not interact with each other, the microstate of the linear chain will never change, and in a perfect gas mixture, there is no diffusion. If the microstate happens to correspond to a non-equilibrium macrostate (e.g., due to its preparation), it will never go to equilibrium as every actual system does. Thus we assume that there is some interaction between the molecules, small enough not to destroy the results of the calculation, but large enough to change the microstates so rapidly, that the temporal average may be equated with the calculated spatial phase average for the system.
Boltzmann systematically introduced the interaction in the six-dimensional phase space.[46] For an arbitrary (because unknown) interaction he substituted a collision probability function for pairs of molecules, describing the probability at any time, for given positions and velocities before the collision, to find the change in velocity caused by the interaction. The theory leads to the so-called Boltzmann equation, which can account for many phenomena like viscosity, diffusion, and thermal and electric conduction, if certain assumptions are made concerning the interaction. This approach is also useful in quantum physics. In this formalism a function H (eta) can be defined, which decreases in time for any system consisting of interacting molecules until the system has reached its equilibrium state, in which H is constant. (This equilibrium state is again the Maxwell-Boltzmann distribution.)[47]
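The monotonic decrease of H can be illustrated with a toy relaxation process (a sketch added here: a symmetric two-state master equation, not Boltzmann's full collision term):

import math

# Two-state system with symmetric transition rate w. The occupations relax
# to equilibrium (f1 = f2 = 1/2), and H = sum of f·ln f decreases
# monotonically to its minimum -ln 2, mimicking the H-theorem.
f1, f2, w, dt = 0.9, 0.1, 1.0, 0.01
for step in range(501):
    if step % 100 == 0:
        H = f1 * math.log(f1) + f2 * math.log(f2)
        print(f"t = {step * dt:4.2f}   H = {H:+.4f}")
    flow = w * (f2 - f1) * dt
    f1, f2 = f1 + flow, f2 - flow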
The function H can be connected to the entropy of the system, and therefore Boltzmann’s theory was hailed as deriving the irreversible approach to equilibrium from reversible kinematics (chapter 6). Here it suffices to observe that this derivation depends essentially on the interaction between the molecules, and therefore is not of purely kinematic character. Moreover, the derivation makes use of probability theory and must then distinguish between actual states in the past, and possible states in the future, meaning that irreversibility is presupposed from the start.
Because of the difficulties inherent in Boltzmann’s and Gibbs’ approaches as to the explanation of irreversibility, modern treatises no longer try to derive irreversibility from essentially mechanical systems. Rather, irreversibility is introduced from the outset. This is especially done in the form of Markov processes, in which the state of a system depends only on the immediately preceding state. This approach also has its difficulties, especially because of the continuity of time. On the other hand, however, it has possibilities not shown by the classical methods.[48]
Finally, in all applications of probability theory the initial state forms a separate problem, at least in physics.[49] Although the initial state may be partly determined by some previous interaction or preparation, it necessarily has an amount of disorder, ‘molecular chaos’, or ‘randomness’.[50] Attempts to define randomness have never succeeded; it appears that one has to accept it as a primitive concept. For instance, when checking probabilities in dice playing, it is assumed that the way the dice are thrown does not, on average, influence the result. An honest card player is assumed to shuffle his cards at random. And in an opinion poll one has to strive for a representative sample. There are criteria to avoid biased samples, but there is no universal criterion to establish a completely random sample.
Randomness may be considered another expression of the individuality of the systems concerned, which cannot be fully delimited by specifying some of their properties. On the one hand, complete randomness does not exist. Statistical predictions can only be made with respect to systems of which at least something is known of their typical structure. On the other hand, probability without randomness is useless. In quantum physics the initial state determining the statistical distribution also contains an element of randomness. According to a theorem related to the Heisenberg relations, if any property is completely determined by its preparation, the ‘canonically conjugate’ property is completely random. In general, the initial state in quantum physics can be better specified than in classical physics. But even then it always contains an undetermined phase.
[1] Tolman 1938, 2, 43.
[2] Einstein 1949, 19ff.
[3] Akhieser, Berestetsky 1953, 17; Messiah 1958, 4, 138; Čapek 1961, chapter 14; Bunge 1967a 108, 24.
[4] Reichenbach 1956, 56.
[5] Einstein 1949, 82ff; Klein 1964, 1970; Hooker 1972.
[6] Bohm 1957.
[7] Von Neumann 1932; see also Jauch 1968, chapter 7; Jammer 1966, 366ff; 1974, 265ff.
[8] On the completeness of quantum theory, see Jammer 1966, 366ff.
[9] See Bohr 1934, Introduction: ‘The aim of science is to extend as well as to order our observations …’.
[10] Heisenberg 1958, chapters 3 and 8; Losee 1964; Hanson 1959.
[11] Jammer 1966, 347; see Bohr 1949, 209, 223, 230.
[12] Meyer-Abich 1965, 102.
[13] Bridgman, in Henkin et al. (eds.), 229.
[14] Čapek 1961, 303f.
[15] Heisenberg 1955, 29: ‘… dass die unvollständige Kenntnis eines Systems ein wesentlicher Bestandteil jeder Formulierung der Quantentheorie sein muss’ (‘… that incomplete knowledge of a system must be an essential component of any formulation of quantum theory’). For a criticism of this view, see Popper 1967.
[16] Jammer 1966, 330; 1974 75ff; see also Heitler 1949, 192.
[17] Cp. Hempel 1965, 392.
[18] Van Melsen 1946, 138ff; 1955, 148ff, 271ff. The view that determinism is instrumental for any science is also expressed by Claude Bernard, cf. Kolakowski 1966, 90ff.
[19] Van Melsen 1946, 157ff; 1955, 285ff.
[20] Van Melsen 1955, 300.
[21] Čapek 1961, 338ff.
[22] In the set U of all possibilities (the ‘universe of discourse’, or ‘sample space’) having sub-sets A, B, … one distinguishes the union of two sub-sets A∨B and the intersection of two sub-sets A∧B. An element of U is an element of A∨B if it is an element of A, or of B, or both. It is an element of A∧B if it is an element of both A and B. We call A and B disjoint if A∧B=∅, the empty set containing no element. A is a subset of B, or B includes A (A⊂B), if A∨B=B, or A∧B=A. We call −A the complement of A, if (−A)∨A=U and (−A)∧A=∅. A∨B, A∧B, and −A are sub-sets of U, if A and B are. The following set-properties can easily be derived: A∧B=B∧A; A∧U=A; A∨U=U; A∧A=A; A∨B=B∨A; A∨A=A; A∨∅=A; A∧∅=∅; −U=∅; −(−A)=A. These definitions and properties do not define a group, but a so-called Boolean algebra. Boole 1854; Suppes 1957, 202ff; another approach is that of a Borel set.
[23] Nagel 1939, 92 ff; Bunge 1967a, 89-93; Popper 1959, 326ff; 1967; Jauch 1968; Hempel 1965, 386ff; Hesse 1974, Ch. 5; Suppes 1957, 274-291.
[24] Observe that the theory ascribes a probability to the subsets, not to the elements of a set.
[25] Reichenbach 1956, 157ff.
[26] Genetics calls this a Punnett square, after R.C. Punnett (1905). If U is a spatial figure with unit magnitude, P(A) is the magnitude of a proper part of the figure. Hence, so far the theory is not intrinsically a probability theory.
[27] Popper 1959, 168f.
[28] Braithwaite 1953; Carnap 1950; Jammer 1974, 7; Nagel 1939; Margenau 1950, chapter 13; Poincaré 1906, chapter 11; Popper 1967.
[29] Keynes 1921; Jeffreys 1939; Hesse 1974.
[30] See Hempel 1965, 57ff, 381ff, 385: A mathematical theory of ‘inductive probability’ (as developed by Carnap) is only available for a relatively simple kind of formalized language; ‘… the extension of this approach to languages whose logical apparatus would be adequate for the formulation of advanced scientific theories is as yet an open problem’.
[31] Margenau 1950, 250ff; Popper 1967, 29.
[32] Popper 1959, 168.
[33] Von Mises 1939, 163-176; Reichenbach 1956, 96ff.
[34] Hempel 1965, 387 essentially supports the view of Von Mises and Reichenbach, although he criticizes their formulations. They define probability as the limit of the relative frequency in an infinite series of performances, and Hempel rightly observes that such series are not realizable. But this criticism does not touch the heart of the problem – namely, that the probability has both a law side and a subject side, and that the former cannot be reduced to the latter.
[35] Popper 1959.
[36] Popper 1967, 32; 1974.
[37] Maxwell 1860; Born 1949, 50f.
[38] Several details can be criticized, and there are other derivations, see Born 1949, 51ff; Tolman 1938, chapter 4.
[39] Kittel 1969; Tolman 1938, 43ff.
[40] Tolman 1938, 59; Kittel 1969, 34, 35; Popper 1959, 208.
[41] See e.g., Kittel 1969, 304-307, 390-392.
[42] This refers to possible rotations about two independent axes. The relative vibration of the two atoms leads to another degree of freedom.
[43] Mott 1964.
[44] Truesdell 1968, 360-363; Khinchin 1947, chapter 3; Tolman 1938, 65ff; Penrose 1970, 39ff; Reichenbach 1956, 78-81; Prigogine 1980, 33-42, 64-65; Sklar 1993, 164-194.
[45] Kittel, Chapter 2ff.
[46] See, e.g., Tolman 1938, chapters 5 and 6.
[47] As observed in 6.6, it is essential in this derivation that the state of a system be described by a domain (not a point) in state space.
[48] See, e.g., Penrose 1970.
[49] In biology, or sociology, the related problem is that of the ‘population’, the ‘Kollektiv’, or a ‘representative sample’.
[50] Hempel 1965, 386; Nagel 1939, 32ff; Popper 1959, 151ff, 359ff.
Chapter 9
Probability in quantum physics
9.1. Wave theory of probability
9.2. Operators in Hilbert space
9.3. Static quantum probability theory
9.4. State preparation, randomness, and complementarity
9.5. Modal symmetry: energy and momentum
9.6. Spin
9.7. Typical symmetry
9.8. The temporal evolution of an isolated system
9.9. Actualization
9.1. Emergence of the wave theory of probability
This section critically reviews the history of quantum physics. The development of its basic concepts involved many years of concerted effort on the part of theoretical and experimental physicists in many countries. The basic ideas of the theory were essentially established during a thirty-year period (1900-1930), yet at the end of the 20th century there was still no agreement about the interpretation of its foundations. The subsequent sections discuss the mathematical framework, which was basically established in the years 1925-1930 by physicists such as Louis de Broglie, Erwin Schrödinger, Werner Heisenberg, Max Born, Wolfgang Pauli, Pascual Jordan, and Paul Dirac, and by mathematicians such as John von Neumann.[1] The so-called Hilbert-space representation is not the only one,[2] but it suffices for the purpose of showing the probabilistic character of quantum physics, and of pointing out how it differs from classical probability theory (chapter 8).
The general theory of quantum physics addresses five related problems:
(a) To find the spectrum of possible properties of the system under study (9.3).
(b) To give an objective description of the (initial) state of the system, to the extent that it is specified, and to the extent that it is at random (9.4).
(c) To determine the relative statistical weights associated with the possible properties of the system relative to its state. This implies the discussion of the external (modal) symmetries (9.5, 9.6) as well as the internal structure, partly expressed by internal symmetries (9.7).
(d) To determine the temporal development of the state during the time from one interaction to the next, and to treat the problem of interference (9.8).
(e) To explain the actualization of one of the possible properties via an interaction, which implies the distinction between possessed properties and latent propensities (9.8).
Hilbert space
It is most remarkable that at least the first four problems can be treated within the context of a single concept, that of a complex Hilbert space, with its associated hermitean operators. This concept is an abstract one, and has many realizations. All Hilbert spaces with the same number of dimensions are isomorphic to each other. The basic hypothesis of quantum physics says that the set of possible states of a system is isomorphic to a Hilbert space of a certain dimensionality, which depends on the typical structure of the system.[3]
Any property of the system is related to a coordinate system (a set of basis functions) in Hilbert space, such that the property’s spectrum is related to the dimension of that space. Properties with a number of possible values less than the dimension of the Hilbert space are called degenerate for that system. Degeneracy is always connected to some kind of symmetry.
The probability associated with a certain value of some property is determined jointly by the spectrum of that property and the state at the moment the interaction revealing (probing) that property takes place. Thus, while the concept of a Hilbert space provides the description of probabilities in an isolated system, at the same time it anticipates interaction. Its use breaks down as soon as we want to investigate the interaction itself. So the fifth problem mentioned above is at best partly solved.
Operators in Hilbert space
A Hilbert space is the dense and complete set of all linear combinations of a number of basis functions with complex coefficients (2.7). In this space for any pair of functions f1 and f2 a linear functional (f1,f2) exists and is called the scalar product. Now the concept of a linear operator is introduced as a mapping of the Hilbert space onto itself, more or less similar to a rotation in a Euclidean space of two or three dimensions.[4] If f and g are arbitrary functions in the Hilbert space H, then A is a linear operator if it transforms the function f into Af such that Af is a function in H. The identity operator I transforms each function into itself, and the zero operator reduces each function to zero. All linear operators in a Hilbert space form a group with respect to addition, with the zero operator as identity element, and with –A=(-1)A as the inverse of A. In general, multiplication of operators is not commutative. A and B are said to commute if AB=BA.
Quantum physics is especially interested in hermitean operators (for which A=A+, the adjoint operator[5]) and in unitary operators (defined by UU+=I, the identity operator).
Hermitean operators, eigenvectors and eigenvalues
An operator is called hermitean or self-adjoint if for any pair of functions f and g in H, (f,Ag)=(Af,g). Each hermitean operator generates a basis in the Hilbert space. This means, for any hermitean operator A there exist vectors ni such that Ani=aini, where ai is a real number. If normalized, the so-called eigenvectors or eigenfunctions ni of A have the properties of basis functions in H: (ni,ni)=1, (ni,nj)=0, for any i and j, i≠j. Moreover, the set of ni’s is complete, which means that any function in the Hilbert space can be written as a linear combination of those eigenvectors.
The real eigenvalues ai can serve to distinguish the eigenvectors. However, if two mutually orthogonal eigenvectors have the same eigenvalue, all vectors in the two-dimensional space consisting of the linear combinations of these two eigenvectors are also eigenvectors. Hence an eigenvalue determines a subspace in Hilbert space, whether one-dimensional (non-degenerate eigenvalue) or multi-dimensional (degenerate eigenvalue).
If two hermitean operators commute, they have the same set of eigenvectors, but with different eigenvalues and different degeneracy. To every unit vector ni of a basis in Hilbert space is connected a hermitean operator Pi, which transforms any function into its projection onto that unit vector. Thus, because f=Σ(f,ni)ni, we have Pif=(f,ni)ni. For the basis vectors themselves, Pini=ni and Pinj=0 if i≠j. Hence the eigenvalues of Pi are either one or zero (the latter is highly degenerate), and the projection operators can be used to describe yes-no experiments.[6]
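These properties of hermitean and projection operators can be verified directly in a finite-dimensional Hilbert space; the following NumPy sketch (an illustration added here) does so for a random three-dimensional example:

import numpy as np

# A random hermitean operator on a three-dimensional complex Hilbert space.
rng = np.random.default_rng(0)
M = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
A = (M + M.conj().T) / 2                  # self-adjoint: A = A+

a, n = np.linalg.eigh(A)                  # real eigenvalues, orthonormal eigenvectors
n0 = n[:, 0]                              # one eigenvector n_0
P0 = np.outer(n0, n0.conj())              # projection operator onto n_0

assert np.allclose(P0 @ P0, P0)           # idempotent, so eigenvalues are 1 or 0
assert np.allclose(P0 @ n0, n0)           # P_i n_i = n_i
f = rng.normal(size=3) + 1j * rng.normal(size=3)
assert np.allclose(P0 @ f, (n0.conj() @ f) * n0)   # the projection (n_0, f)·n_0 of f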
Unitary operators and symmetry
A unitary operator U is defined by the property UU+=I, the identity operator. Thus unitary operators can form a multiplication group with I as the identity element, and U+ as the inverse of U. The application of a unitary operator to a basis leads to a new basis having the same orthogonality and normalization properties. With this change of basis, a hermitean operator A is transformed into U+AU. If A and U commute, A is not changed (U+AU=U+UA=A). Therefore, unitary operators are very useful in describing symmetry operations, in which transformations of the state of the system are made without changing its properties.
Unitary operators turn out to be particularly useful for the description of the spatial and temporal homogeneity for isolated systems (9.5). For spatial isotropy one finds that the degeneracy of eigenvalues is not complete, so that the corresponding unitary operator is a two- or more-dimensional matrix (9.6).
9.2. Problems concerning wave packets
Probability is a measure over an ensemble, a set of possibilities (8.3). If the set of possibilities is continuous, this is a field over a space or a region in a space. Stating this once again shows the static character of classical probability theory, and points, at the same time, to the way to open it up in a kinematic sense. For the kinematics of a field leads to the theory of waves, in particular the concept of a wave packet as an aggregate of waves (chapter 7). This theory is not only applicable to physical fields in physical space, but to any field in any space, including probability. The actualization of any possibility, which requires a physical interaction, completes the development of probability.
A signal composed from a set of periodic waves is called a wave packet. Although a wave packet is a kinetic subject, it achieves its foremost meaning if its physical interaction is taken into account. The wave-particle duality has turned out to be equally fundamental and controversial. Neither experiments nor theories leave room for doubt about the existence of the wave-particle duality. However, it seems to contradict common sense, and its interpretation has been the object of hot debates.
Common sense dictated that waves and particles exclude each other, meaning that light is either one or the other. When the wave theory turned out to explain more phenomena than the particle model, the battle appeared to be over.[7] Light is wave motion, as was confirmed by Maxwell’s theory of electromagnetism. Nobody realized that this conclusion was a non sequitur. At most it could be said that light has wave properties, as follows from the interference experiments of Young and Fresnel, and that Newton’s particle theory of light was refuted.[8]
A dualistic world view
At the end of the 19th century, this gave rise to a rather neat and rationally satisfactory world view. Nature consists partly of particles, for the other part of waves, or of fields in which waves are moving. This dualistic world view assumes that something is either a particle or a wave, but never both, tertium non datur.
It makes sense to distinguish a dualism, a partition of the world into two compartments, from a duality, a two-sidedness. The dualism of waves and particles rested on common sense, one could not imagine an alternative. However, 20th-century physics had to abandon this dualism perforce and to replace it by the wave-particle duality. All elementary things have both a wave and a particle character (7.8).
Almost in passing, another phenomenon, called quantization, made its appearance. It turned out that some magnitudes are not continuously variable. The mass of an atom can only have a certain well-defined value. Atoms emit light at sharply defined frequencies. Electric charge is an integral multiple of the elementary charge. In 1905 Albert Einstein suggested that light consists of quanta with energy E = hf. In Niels Bohr’s atomic theory (1913), the angular momentum of an electron in its atomic orbit is an integer times Max Planck’s reduced constant.[10] Until Erwin Schrödinger and Werner Heisenberg introduced modern quantum mechanics in 1925-1926, atomic scientists repeatedly found new quantum numbers with corresponding rules.
Louis de Broglie
In 1923, Louis de Broglie published a mathematical paper about the wave-particle character of light.[11] Applying the theory of relativity, he predicted that electrons too would have a wave character. The motion of a particle or energy quantum does not correspond to a single monochromatic wave but to a group of waves, a wave packet. The speed of a particle cannot be related to the wave velocity (λ/T=f/s), which is larger than the speed of light for a material particle. Instead, the particle speed corresponds to the speed of the wave packet, the group velocity. This is the derivative of frequency with respect to wave number (df/ds) rather than their quotient. Because of the relations of Planck and Einstein, this is the derivative of energy with respect to momentum as well (dE/dp). At most, the group velocity equals the speed of light.[12]
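The claim about the two velocities follows in one line from the relativistic energy-momentum relation together with E = hf and p = hs; the standard derivation is added here for clarity:

E^2 = (pc)^2 + (mc^2)^2 \;\Longrightarrow\;
v_{\mathrm{phase}} = \frac{f}{s} = \frac{E}{p} \ge c, \qquad
v_{\mathrm{group}} = \frac{df}{ds} = \frac{dE}{dp} = \frac{pc^2}{E} = v \le c,

and the two velocities satisfy v_phase · v_group = c².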
In order to test these suggestions, physicists had to find out whether electrons show interference phenomena. Experiments by Clinton Davisson and Lester Germer in America and by George P. Thomson in England (1927) proved convincingly the wave character of electrons, thirty years after Thomson’s father Joseph J. Thomson established the particle character of electrons. As predicted by De Broglie, the linear momentum turned out to be proportional to the wave number. Afterwards the wave character of atoms and nucleons was demonstrated experimentally.
This meant the end of the wave-particle (or matter-field) dualism, according to which all phenomena have either a wave character or a particle character, and the beginning of the wave-particle duality as a universal property of matter (7.8). In 1927, Niels Bohr called the wave and particle properties complementary.[13] Bohr also asserted that measurements can only be analyzed in classical mechanical terms, using arguments derived from Immanuel Kant.[14]
The dual character of physical particles
An interesting aspect of a wave is that it concerns a movement in motion, a propagating oscillation. Classical mechanics restricted itself to the motion of unchangeable pieces of matter. For macroscopic bodies like billiard balls, bullets, cars and planets, this is a fair approximation, but for microscopic particles it is not.[15] The experimentally established fact of photons, electrons, and other microsystems having both wave and particle properties does not fit the still popular mechanistic world view. However, the theory of characters (10.2) accounts for this fact as follows.
The character of an electron consists of an interlacement of …
Ab initio study of the photoabsorption of ⁴He
W. Horiuchi RIKEN Nishina Center, Wako 351-0198, Japan Y. Suzuki Department of Physics, Niigata University, Niigata 950-2181, Japan RIKEN Nishina Center, Wako 351-0198, Japan K. Arai Division of General Education, Nagaoka National College of Technology, Nagaoka 940-8532, Japan
There are some discrepancies in the low-energy data on the photoabsorption cross section of ⁴He. We calculate the cross section with realistic nuclear forces and explicitly correlated Gaussian functions. Final-state interactions and two- and three-body decay channels are taken into account. The cross section is evaluated in two methods: with the complex scaling method the total absorption cross section is obtained up to the rest energy of a pion, and with the microscopic R-matrix method both cross sections ⁴He(γ,p)³H and ⁴He(γ,n)³He are calculated below 40 MeV. Both methods give virtually the same result. The cross section rises sharply from the ³H+p threshold, reaching a giant resonance peak at 26–27 MeV. Our calculation reproduces almost all the data above 30 MeV. We stress the importance of ³H+p and ³He+n cluster configurations on the cross section as well as the effect of the one-pion exchange potential on the photonuclear sum rule.
25.20.Dc, 25.40.Lw, 27.10.+h, 21.60.De
I Introduction
Nuclear strength or response functions for electroweak interactions provide us with important information on the resonant and continuum structure of the nuclear system as well as the detailed properties of the underlying interactions. In this paper we focus on the photoabsorption of ⁴He. The experimental study of (γ,p) and (γ,n) reactions on ⁴He has a long history over the last half century; see Refs. [shima; nilsson; quaglioni04] and references therein. Unfortunately the experimental data presented so far are in serious disagreement, and thus measurements of the photoabsorption cross section are still actively performed with different techniques in order to resolve this enigma [nakayama; tornow].
Calculations of the cross section of ⁴He have been performed with several methods, focusing on, e.g., the peak position of the giant electric dipole (E1) resonance, charge-symmetry-breaking effects, and sum rules [efros; wachter; gazitb]. The photoabsorption cross section has extensively been calculated in the Lorentz integral transform (LIT) method [LIT], among others, which does not require calculating continuum wave functions. In the LIT the cross section is obtained by inverting the integral transform of the strength function, which is calculable using square-integrable (L²) functions. The calculations were done with the Malfliet-Tjon central force [quaglioni04], the realistic Argonne v18 potential [gazit; bacca], and an interaction based on chiral effective field theory [quaglioni].
In the calculations with the realistic interactions, some singular features of them, especially the short-range repulsion, have been appropriately replaced with an effective interaction adapted to the model space of the respective approaches, that is, the hyperspherical harmonics method [gazit; bacca] and the no-core shell model [quaglioni]. All of these calculations show a cross section that disagrees with the data of Ref. [shima], especially at low excitation energies near the ³H+p threshold. The resonance peak obtained theoretically appears at about 27 MeV, consistently with the experiments [nilsson; nakayama; tornow], but in marked difference from that of Ref. [shima].
We have recently reported that all the observed levels of ⁴He below 26 MeV are well reproduced in a four-body calculation using bare realistic nuclear interactions [dgvr; inversion]. It is found that using the realistic interaction is vital to reproduce the ⁴He spectrum as well as the well-developed 3+1 (³H+p and ³He+n) cluster states with positive and negative parities. In this calculation the wave functions of the states are approximated by a combination of explicitly correlated Gaussians [boys; singer] reinforced with a global vector representation for the angular motion [varga; svm]. Furthermore, this approach has very recently been applied to successfully describe four-nucleon scattering and reactions [arai11; fbaoyama] with the aid of a microscopic R-matrix method (MRM) [desc]. It is found that the tensor force plays a crucial role in accounting for the astrophysical S factors of the radiative capture reaction ²H(d,γ)⁴He as well as the nucleon transfer reactions ²H(d,p)³H and ²H(d,n)³He [arai11].
The aim of this paper is to examine the issue of the photoabsorption cross section of ⁴He. Because four-body bound-state problems with realistic nucleon-nucleon (NN) interactions can be accurately solved with the correlated Gaussians, it is interesting to apply that approach to a calculation of the photoabsorption cross section. For this purpose we have to convert the continuum problem to a bound-state-like problem that can be treated in square-integrable basis functions. Differently from the previous theoretical calculations [quaglioni04; gazit; bacca; quaglioni], we employ a complex scaling method (CSM) [ho; moiseyev; CSM] to avoid a construction of the continuum wave functions. One of the advantages of the CSM is that the cross section can be directly obtained without recourse to a sophisticated inversion technique as used in the LIT or an artificial energy-averaging procedure. We will pay special attention to the following points:
1. To use a realistic interaction as it is
2. To include couplings with final decay channels explicitly
3. To perform calculations in both MRM and CSM as a cross-check.
Here point (1) indicates that the interaction is not changed to an effective force by some transformation. This looks sound and appealing because the cross section may depend on the D-state probability of ⁴He [wachter], and hence the effect of the tensor force on the cross section can be seen directly. In point (2) we make use of the flexibility of the correlated Gaussians to include the important configurations that have ³H+p, ³He+n, and d+n+p partitions. Thanks to this treatment the effects of final-state interactions are expected to be fully taken into account. Point (3) is probably most significant in our approach. We mean by this point that the photoabsorption cross section is calculated in two independent methods. In the MRM we calculate the cross sections for the radiative capture reactions ³H(p,γ)⁴He and ³He(n,γ)⁴He, and these cross sections are converted to the photoabsorption cross section using a formula due to the detailed balance. In the CSM we make use of the fact that the final continuum states of ⁴He, if rotated on the complex coordinate plane, can be expanded in square-integrable functions. Consistency of the two results, if attained, serves as strong evidence that the obtained cross section is reliable. We hope to shed light on resolving the controversy from our theoretical input.
In Sec. II we present our theoretical prescriptions to calculate the photoabsorption cross section. The two approaches, the CSM and the MRM, are explained in this section with emphasis on the method of how discretized states are employed for the continuum problem. We give the basic inputs of our calculation in Sec. III. The detail of our correlated basis functions is given in Sec. III.2, and various configurations needed to take into account the final state interactions as well as two- and three-body decay channels are explained in Sec. III.4. We show results on the photoabsorption cross section in Sec. IV. The strength function and the transition densities calculated from the continuum discretized states are presented in Sec. IV.1. A comparison of CSM and MRM cross sections is made in Sec. IV.2. The photonuclear sum rules are examined in Sec. IV.3. The calculated photoabsorption cross sections are compared to experiment in Sec. IV.4. Finally we draw conclusions of this work in Sec. V.
II Formulation of photoabsorption cross section calculation
II.1 Basic formula
The photoabsorption takes place mainly through the electric dipole (E1) transition, which can be treated by perturbation theory. The wavelength of a photon with energy Eγ (MeV) is about 1240/Eγ (fm), so that it is long compared to the radius of ⁴He even when Eγ is close to the rest energy of a pion. The photoabsorption cross section can be calculated by the formula [ring]
where S(Eγ) is the strength function for the E1 transition
The symbol Mμ denotes the E1 operator, and Ψ0 and Ψf are the wave functions of the ground state with energy E0 and the final state with excitation energy Ex of ⁴He, respectively. The recoil energy of ⁴He is ignored, so that Eγ is equal to the nuclear excitation energy Ex. The summation runs over μ and all possible final states f. The final state of ⁴He is actually a continuum state lying above the ³H+p threshold, and it is normalized according to a delta function. The sum or integral over the final states in S(Eγ) can be taken using the closure relation, leading to a well-known expression for the strength function
where a positive infinitesimal ε ensures the outgoing wave after the excitation of ⁴He. A method of calculating S(Eγ) in the CSM is presented in Sec. II.2.
A partial photoabsorption cross section for the two-body final state comprising nuclei A and B can be calculated in another way. With use of the detailed balance, the cross section is related to that of its inverse process, the radiative capture cross section [thompson09], induced by the E1 transition at the incident energy E,
where Eth is the A+B threshold energy. Here IA and IB are the angular momenta of the nuclei A and B, and J0 is the angular momentum of the ground state of ⁴He. The wave number is k=√(2μE)/ħ, where μ is the reduced mass of the two nuclei, and kγ=Eγ/ħc is the photon wave number. The photoabsorption cross section is equal to the sum of the ³H+p and ³He+n partial cross sections provided that three- and four-body breakup contributions are negligible. A calculation of the radiative capture cross section will be performed in the MRM as explained in Sec. II.3.
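The detailed-balance relation referred to here presumably has the standard form (a reconstruction in the notation above, not a verbatim quotation; the factor 2 counts the two photon polarizations):

\sigma_\gamma(E_\gamma) = \frac{(2I_A+1)(2I_B+1)}{2(2J_0+1)}\,
\frac{k^2}{k_\gamma^2}\,\sigma_{\mathrm{cap}}(E), \qquad
E = E_\gamma - E_{\mathrm{th}}.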
The fact that we have two independent methods of calculating the photoabsorption cross section is quite important to assess their validity.
II.2 Complex scaling method
The quantity S(Eγ) of Eq. (3) is evaluated using the CSM, which makes a continuum state that has an outgoing wave in the asymptotic region damp at large distances, thus enabling us to avoid an explicit construction of the continuum state. In the CSM the single-particle coordinate and momentum are subject to a rotation by an angle θ: r → r exp(iθ), p → p exp(−iθ).
Applying this transformation in Eq. (3) leads to
where Gθ(Eγ) is the complex-scaled resolvent
A key point in the CSM is that within a suitable range of positive θ the eigenvalue problem for the complex-scaled Hamiltonian can be solved in a set of square-integrable (L²) basis functions
We are interested in S(Eγ) for Eγ > 0. With the solutions of Eq. (9), an expression for S(Eγ) reads [myo; threebody]
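The expression referred to here presumably takes the standard CSM form (a reconstruction from the general CSM literature, with (Eλθ, Ψλθ) the solutions of Eq. (9) and the tilde denoting the biorthogonal states; not a verbatim quotation of the paper's equation):

S(E_\gamma) = -\frac{1}{\pi}\,\mathrm{Im}\sum_{\mu\lambda}
\frac{\langle\widetilde{\Psi}_0^\theta|(\mathcal{M}_\mu)^{\dagger\theta}|\Psi_\lambda^\theta\rangle\,
\langle\widetilde{\Psi}_\lambda^\theta|(\mathcal{M}_\mu)^{\theta}|\Psi_0^\theta\rangle}
{E_\gamma + E_0 - E_\lambda^\theta}.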
Note that the energy of the bound state in principle remains the same against the scaling angle θ. Also, the ground-state wave function is to be understood as a solution of Eq. (9) for the eigenvalue corresponding to the ground-state energy [threebody]. This stability condition will be met when the basis functions are chosen sufficiently well.
In such a case where sharp resonances exist, the angle θ has to be rotated to cover their resonance poles on the complex energy plane [moiseyev; CSM]. A choice of θ is made by examining the stability of S(Eγ) with respect to the angle. One of the advantages of the CSM is that one needs no artificial energy-smoothing procedure but obtains the continuous cross section naturally.
II.3 Microscopic R-matrix method
The calculation of the radiative capture cross section involves the matrix element of the E1 operator between the scattering state initiated through the A+B entrance channel and the final state, i.e., the ground state of ⁴He; see, e.g., Ref. [arai02]. The scattering problem is solved in the MRM. As is discussed in detail for the four-nucleon scattering [arai10; fbaoyama], an accurate solution of the scattering problem with realistic potentials in general requires a full account of couplings of various channels. In the present study we include the following two-body channels: ³H+p, ³He+n, d(1+)+d(1+), and two channels of the type NN(0)+NN(0) built from spin-singlet two-nucleon pseudo states. Here, for example, ³H stands for not only the ground state of ³H but also its excited (pseudo) states. The latter are actually unbound, and these configurations together with the ground-state wave function are obtained by diagonalizing the intrinsic Hamiltonian for the three-nucleon system in square-integrable basis functions. Similarly, the spin-singlet pn, nn, and pp pairs stand for the two-nucleon pseudo states with isospin T=1.
The total wave function may be expressed in terms of a combination of various components Ψc, with
where, e.g., NA is the basis size for the nucleus A, ΦiA is the intrinsic wave function of its ith state with angular momentum IA and parity πA, and χc is the relative-motion function between the two nuclei. The angular momenta of the two nuclei are coupled to the channel spin S, which is further coupled with the partial wave ℓ for the relative motion to the total angular momentum J. The index c denotes a set of quantum numbers such as (IA, IB, S, ℓ). The parity of the total wave function is πAπB(−1)^ℓ.
In the MRM the configuration space is divided into two regions, internal and external, by a channel radius. The total wave function in the internal region is constructed by expanding the relative-motion functions in terms of square-integrable basis functions with a suitable set of parameters, while the total wave function in the external region is represented by expressing the relative-motion functions with Coulomb or Whittaker functions, depending on whether the channel is open or not. The scattering wave function and the scattering matrix are determined by solving a Schrödinger equation
in the internal region together with the continuity condition at the channel radius. Here L is the Bloch operator; see Ref. [desc] for details.
In the MRM the ground-state wave function of ⁴He is approximated in combinations of the multi-channel configurations.
III Model
III.1 Hamiltonian
The Hamiltonian we use reads
The kinetic energy of the center-of-mass motion is subtracted, and the two-nucleon interaction consists of nuclear and Coulomb parts. As the nuclear potential we employ the Argonne v8′ (AV8′) [AV8p] and G3RS [tamagaki] potentials that contain central, tensor, and spin-orbit components. The L² and (L·S)² terms in the G3RS potential are omitted. The potential of AV8′ type contains eight pieces, V=Σp Vp(r)Op, where Vp(r) and Op are the radial form factor and the operator characterizing each piece of the potential. The operators are defined as O1=1, O2=σ1·σ2, O3=τ1·τ2, O4=(σ1·σ2)(τ1·τ2), O5=S12, O6=S12(τ1·τ2), O7=L·S, O8=(L·S)(τ1·τ2), where S12 is the tensor operator and L·S is the spin-orbit operator. For the sake of later convenience, we single out the one-pion exchange part of the potential separately.
The AV8′ potential is more repulsive at short distances and has a stronger tensor component than the G3RS potential. Due to this property one has to perform calculations of high accuracy, particularly when the AV8′ potential is used, in order to be safe from those problems of the CSM that are raised by Witała and Glöckle [witala]. To reproduce the two- and three-body threshold energies is vital for a realistic calculation of the cross section. To this end we add a three-nucleon force (3NF), and adopt a purely phenomenological potential [hiyama] that is determined to fit the inelastic electron form factor from the ground state to the first excited state of ⁴He as well as the binding energies of ³H, ³He, and ⁴He.
III.2 Gaussian basis functions
Basis functions defined here can apply to any number of nucleons. The basis function we use for an N-nucleon system takes a general form in the LS coupling scheme
where is the antisymmetrizer. We define spin functions by a successive coupling of each spin function
Since taking all possible intermediate spins (, ) forms a complete set for a given , any spin function can be expanded in terms of the functions (19). Similarly the isospin function can also be expanded using a set of isospin functions . In the MRM calculation we use a particle basis that in general contains a mixing of the total isospin , which is caused by the Coulomb potential.
There is no complete set that is flexible enough to describe the spatial part . For example, harmonic-oscillator functions are quite inconvenient for describing spatially extended configurations. We use instead an expansion in terms of correlated Gaussians varga (); svm (). As demonstrated in Ref. kamada (), the Gaussian basis leads to accurate solutions for few-body bound states interacting via realistic potentials.
Two types of Gaussians are used. One is a basis expressed in a partial wave expansion
Here the coordinates are a set of relative coordinates. The angular part is represented by successively coupling the partial wave associated with each coordinate. The values of and as well as the intermediate angular momenta are variational parameters. The angular momentum is limited to in the present calculation. This basis is employed to construct the internal wave function of the MRM calculation.
The other is an explicitly correlated Gaussian with a global vector representation varga (); svm (); dgvr (); fbaoyama ()
where is a positive definite symmetric matrix and is an -dimensional column vector. Both and are variational parameters. The tilde symbol denotes a transpose, that is, and . The latter specifies the global vector responsible for the rotation. The basis function (22) will be used in the CSM calculation. In practice the choice of the angular part of Eq. (22) is here restricted to . With the two global vectors any states but can be constructed with a suitable choice of and .
The basis function (22) explicitly includes correlations among the nucleons through the non-vanishing off-diagonal elements of . By contrast, the basis function (20) takes a product form of functions depending on each coordinate, so that the correlations are usually accounted for by including so-called rearrangement channels that are described with different coordinate sets kamimura (). A great advantage of Eq. (22) is that it keeps its functional form under coordinate transformations. Hence one needs no such rearrangement channels but can use just one particular coordinate set, which enables us to calculate Hamiltonian matrix elements in a unified way. See Refs. dgvr (); fbaoyama () for details.
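A minimal sketch of the form invariance just described, written in the generic quadratic-form notation of correlated Gaussians (the symbols are assumptions, since the explicit symbols of Eq. (22) were lost in extraction): under a linear change of the relative coordinates x = Ty,

\exp\left(-\tfrac{1}{2}\,\tilde{x}Ax\right) = \exp\left(-\tfrac{1}{2}\,\tilde{y}\,(\tilde{T}AT)\,y\right) \,,\qquad \tilde{u}x = \widetilde{(\tilde{T}u)}\,y \,,

so the transformed function is again of the same correlated-Gaussian form, now with parameters (\tilde{T}AT, \tilde{T}u).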
The variational parameters are determined by the stochastic variational method varga (); svm (). It is confirmed that both types of basis functions produce accurate results for the ground-state properties of H, He, and He dgvr (). Table 1 lists the properties of H and He obtained using the basis (22). The included and values are the same as those used in Refs. dgvr (); inversion (). Both the AV8+3NF and G3RS+3NF potentials reproduce the binding energy and the root-mean-square radius of He satisfactorily. The G3RS+3NF potential gives a slightly larger radius and a smaller D-state probability than the AV8+3NF potential.
              AV8+3NF              G3RS+3NF
              H        He          H        He
E (MeV)       −8.41    −28.43      −8.35    −28.56
r_p (fm)      1.70     1.45        1.74     1.47
r_pp (fm)              2.41                 2.45
P(L=0) (%)    91.25    85.56       92.85    88.33
P(L=2) (%)    8.68     14.07       7.10     11.42
P(L=1) (%)    0.07     0.37        0.05     0.25
Table 1: Ground-state properties of H and He calculated with the correlated Gaussians (22) using the AV8 and G3RS potentials together with the 3NF. Here E, r_p, and r_pp denote the energy, the root-mean-square radius of the proton distribution, and the root-mean-square relative distance of the two protons, respectively, and P(L) stands for the probability (in %) of finding the component with the total orbital angular momentum L and the corresponding spin. The experimental energy of He is −28.296 MeV and the point proton radius is 1.457(14) fm mueller ().
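The stochastic variational method mentioned above admits a compact illustration. The following is a minimal sketch on a hypothetical one-dimensional toy problem (a harmonic oscillator with analytic Gaussian matrix elements), not the authors' code: random candidate widths are drawn, and a candidate is kept only if it lowers the variational ground-state energy.

```python
import numpy as np
from scipy.linalg import eigh

# Toy 1D harmonic oscillator, H = -1/2 d^2/dx^2 + 1/2 x^2 (hbar = m = omega = 1),
# in a basis of Gaussians phi_a(x) = exp(-a x^2); all matrix elements are analytic.
def ground_energy(widths):
    a = np.asarray(widths, dtype=float)
    s = a[:, None] + a[None, :]
    overlap = np.sqrt(np.pi / s)                      # <phi_a | phi_b>
    kinetic = (a[:, None] * a[None, :] / s) * overlap  # <phi_a | -1/2 d^2/dx^2 | phi_b>
    potential = overlap / (4.0 * s)                    # <phi_a | x^2/2 | phi_b>
    return eigh(kinetic + potential, overlap, subset_by_index=[0, 0])[0][0]

rng = np.random.default_rng(1)
basis = [1.0]                                          # start from a single Gaussian
for _ in range(200):                                   # stochastic trial-and-error search
    candidate = basis + [10.0 ** rng.uniform(-2.0, 2.0)]
    if ground_energy(candidate) < ground_energy(basis) - 1e-10:
        basis = candidate                              # keep it only if the energy drops
print(len(basis), ground_energy(basis))                # approaches the exact value 0.5
```

The same admit/reject logic, applied to the nonlinear parameters of Eqs. (20) and (22), is what selects the few-body bases used in this section.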
III.3 Two- and three-body decay channels
As is well-known, the electric dipole operator
is an isovector, where is the center-of-mass coordinate of He, and is the Jacobi coordinate: , , . This operator excites the ground state of He to states with = insofar as the small isospin admixture in the ground state of He is ignored. Moreover, those excited states should mainly have a component, because the ground state of He is dominated by the component; see Table 1. Excited states with or 2 components will be weakly populated by the transition through the minor components (12–14%) of the He ground state.
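The explicit form of the dipole operator was lost from the displayed equation above; a standard reconstruction (an assumption based on the usual definition, not a quotation of this paper) is

\mathcal{M}(E1,\mu) = e\sum_{i\in \mathrm{protons}} |\boldsymbol{r}_i-\boldsymbol{X}|\, Y_{1\mu}(\widehat{\boldsymbol{r}_i-\boldsymbol{X}}) \,,

with X the center-of-mass coordinate. Because \sum_i(\boldsymbol{r}_i-\boldsymbol{X}) = 0, the isoscalar part drops out and only the isovector part survives, which is why the operator is an isovector.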
According to the R-matrix phenomenology quoted in Ref. tilley (), two levels with are identified. Their excitation energies and widths in MeV are (Ex, Γ) = (23.64, 6.20) and (25.95, 12.66), respectively. We have recently studied the level structure of He and succeeded in reproducing all the known levels below 26 MeV inversion (). With the 3NF included, two states are predicted at about 23 and 27 MeV in the case of the AV8 potential. They are, however, not clearly identified as resonances in a recent microscopic scattering calculation fbaoyama (). In Sec. IV.1 we will show that three states with strong E1 strength are obtained below 35 MeV in a diagonalization using the basis, and we will discuss the properties of those states.
Low-lying excited states with decay to the H+p and He+n channels in a relative wave. The possible channel spins of the H+p or He+n continuum state are and fbaoyama (). A main component of the continuum state is found to be , while that of the continuum state is . Thus the excitation of He is expected to be followed mainly by the H+p and He+n decays in the channel, which agrees with the result of a resonating-group-method calculation including the H+p, He+n, and d+d physical channels wachter ().
The two-body decay to d+d is suppressed by isospin conservation. Above the d+p+n threshold at 26.07 MeV this three-body decay becomes possible, where the decaying pair is in the state. In fact, the cross section for this three-body decay has been observed experimentally shima ().
III.4 Square-integrable basis with
The accuracy of the CSM calculation crucially depends on how well the basis functions for are prepared for solving the eigenvalue problem (9). We construct the basis paying attention to two points: the sum rule of the strength and the decay channels discussed in Sec. III.3. As the operator (23) suggests, we construct the basis with by choosing the following three operators and letting them act on the basis functions that constitute the ground state of He: (i) a single-particle excitation built with , (ii) a 3N+N (H+p and He+n) two-body disintegration due to , and (iii) a d+p+n three-body disintegration due to . See Fig. 1. The basis (i) is useful for satisfying the sum rule, while the bases (ii) and (iii) take care of the two- and three-body decay asymptotics. These cluster configurations are better described using the relevant relative coordinates rather than the single-particle coordinate. It should be noted that the classification labels are not strictly exclusive, because the basis functions belonging to the different classes have some overlap with one another owing to their non-orthogonality.
Figure 1: (Color online) Three patterns for the dipole excitations for He. Thick solid lines denote the coordinates on which the spatial part of the operator acts.
We slightly truncate the ground-state wave functions of H, He, and He when they are needed to construct the above configurations (i) and (ii). With this truncation the full calculation presented in Sec. IV becomes possible without excessive computer time. As shown in Table 1, the ground states of these nuclei contain only a small amount (less than 0.5%) of the L = 1 component, so we omit this component and reconstruct the ground-state wave functions using only the components with L = 0 and 2 in Eq. (22). The resulting energy loss is small compared to the accurate energies of Table 1: e.g., in the case of AV8+3NF, the loss is 0.23 MeV for H with a basis dimension of 64 and 1.53 MeV for He with a basis dimension of 200. The truncated ground-state wave function is denoted for and for He.
Note, however, that we use the accurate wave function of Table 1 for the He ground state in computing with Eq. (11).
III.4.1 Single-particle (sp) excitation
As is well known, applying the operator to a ground state leads to a coherent state that exhausts all the strength from the ground state. The coherent state is, however, not an eigenstate of the Hamiltonian. In analogy to this, the basis of type (i) is constructed as follows
where is the space-spin part of the th basis function of . We include all the basis functions and all possible for the four-nucleon isospin state with =10. The truncated basis consists of either or in the notation of Eq. (18). The former contains no global vector, while the latter contains one. Since is rewritten as with , the basis (24) contains at most two global vectors and reduces to the correlated Gaussian (22). For example, a basis of the latter type can be reduced, after the angular momentum recoupling, to the standard form with
Each component of is included as an independent basis function in what follows.
III.4.2 3N+N two-body disintegration
In this basis the nucleon couples with the ground and pseudo states of the three-nucleon system. Their relative motion carries the -wave excitations and is described by a combination of several Gaussians. The basis function takes the following form
where is the space-spin part of the th basis function of . The value of takes and , and takes any of , and that, together with , can add up to the angular momentum 1. The parameter is taken in a geometric progression, in fm. As in the basis for the single-particle excitation, the space-spin part is again expressed in the correlated Gaussians (22) with at most two global vectors, where one of the global vectors is = with . All the basis states with different values of and are included independently.
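A one-line sketch of the geometric progression used for the Gaussian parameters (the first term, ratio, and count below are placeholders, since the actual values were lost in extraction):

```python
import numpy as np

b1, ratio, n = 0.2, 1.6, 9           # placeholder values in fm; originals lost
b = b1 * ratio ** np.arange(n)       # geometric progression b_k = b1 * ratio**(k-1)
print(np.round(b, 3))
```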
III.4.3 d+p+n three-body disintegration
In this basis the relative motion between and is of wave character, while the system is excited to the + configuration with -wave relative motion. Here does not necessarily mean the ground state but also includes pseudo states with the angular momentum . The spatial part is, however, taken from the basis functions of the deuteron ground state. The three-body basis function takes the following form
where is the (pseudo) deuteron wave function mentioned above. Both and take the values and . All possible sets of and values satisfying the angular momentum addition rule are included in the calculation. Both and are again given in a geometric progression, in fm. Note that = with . After recoupling the orbital and spin angular momenta, the basis (27) leads to the following space-spin parts: with =0 or 2, where all possible values of are allowed. These are included independently. Note that the matrix of becomes diagonal.
The total basis dimension is 7400 (7760) for AV8 (G3RS)+3NF: 1200 (1560) from (i), 3000 from (ii), and 3200 from (iii).
IV. Results
IV.1 Discretized strength of electric dipole transition
Continuum states with are discretized by diagonalizing the Hamiltonian in the basis functions defined in Sec. III. These discretized states provide an approximate distribution of the strength. Figure 2 displays the reduced transition probability
as a function of the discretized energy . The calculations were performed in each basis set of (i)–(iii) as well as a full basis that includes all of them. The distribution of depends rather weakly on the potentials.
As expected, the three types of basis functions play distinctive and complementary roles in the strength distribution. The basis functions (i) produce strongly concentrated strength at about 27 MeV and another peak above 40 MeV. The component of these states is about 95%. With the + two-body configurations (ii), we obtain two peaks in the region of 20–30 MeV and one or two peaks at around 35 MeV. The two peaks at about 25 MeV may correspond to the levels with =1 at 23.64 and 25.95 MeV with very broad widths tilley (). Note, however, that a microscopic four-nucleon scattering calculation presents no conspicuous resonant phase shifts for the and channels fbaoyama (). The three-body configurations (iii) give relatively small strength spread broadly over the excitation energies above 30 MeV. The three prominent peaks at around 25–35 MeV persist in the full basis calculation. This implies that the low-lying strength mainly comes from the + configuration. We will return to this issue in Sec. IV.4. The three discretized states are labeled by their excitation energies in what follows.
Figure 2: (Color online) Discretized strength of the transitions in He. See the text for the calculations classified by sp, +, ++, and Full.
Table 2 shows the properties of the three states that have strong strength. The expectation value of each piece of the Hamiltonian is a measure of its contribution to the energy. We see that the central (: ) and tensor (: ) terms are major contributors among the interaction pieces. The one-pion exchange potential (OPEP) consists of and terms, so that the tensor force of the OPEP is found to play a vital role. The value of in the table is obtained by the squared coefficient of the expansion
where is normalized. Note that no basis functions with are included in the present calculation, as they are not expressible with the two global vectors. As expected, all three states dominantly consist of the component, which can be excited by the operator from the main component of the He ground state. We see a considerable admixture of the components, especially with , in the three states. This is understood from the role played by the tensor force, which couples the and 2 states. In fact, these states lose energy through large kinetic energy contributions but gain energy through the coupling with the main component via the tensor force. For example, in the case of the state, the diagonal matrix elements of the kinetic energy, , are 196.5 (160.3), 198.6 (161.5), and 199.3 (162.3) MeV for =1, 2, 3 with AV8 (G3RS)+3NF, while the tensor coupling matrix elements between the and states, , are 54.5 (40.2), 70.8 (52.1), and 84.0 (61.9) MeV for the =1, 2, 3 states, respectively.
        AV8+3NF                   G3RS+3NF
        1st      2nd      3rd     1st      2nd      3rd
Ex      23.96    27.05    33.02   24.08    27.25    33.43
        4.46     1.38     4.60    4.48     1.31     4.88
        51.21    54.78    43.71   44.34    48.37    49.65
        6.42     6.37     4.44    0.14     0.24     0.31
        3.41     3.68     1.61    3.07     3.38     2.94
        2.17     2.15     1.65    3.81     3.75     3.43
        23.83    24.04    16.09   20.45    20.81    18.46
        0.22     0.22     0.14    0.41     0.41     0.37
        30.60    30.51    22.71   20.60    20.64    18.80
        4.79     4.77     3.55    2.33     2.33     2.13
        6.76     6.73     4.96    2.37     2.38     2.15
        0.74     0.86     0.55    0.72     0.85     0.85
        0.42     0.45     0.32    0.41     0.45     0.42
        87.18    84.58    82.70   90.12    88.47    79.73
        4.76     7.47     7.59    3.18     4.89     13.86
        0.16     0.25     0.22    0.09     0.13     0.36
        0.89     0.74     4.56    0.85     0.76     0.95
        2.17     1.99     1.40    1.89     1.79     1.41
        4.85     4.97     3.53    3.86     3.95     3.69
Table 2: Properties of the three states that exhibit strong E1 strength; the left three columns refer to AV8+3NF and the right three columns to G3RS+3NF, with the states ordered by excitation energy. The excitation energy Ex and the expectation values are given in units of MeV. The value of is given in %. See Table 1 for the ground-state energy of He.
The transition density is defined as
which gives the transition matrix element through
Figure 3 displays the transition densities for the three states of Table 2 that give large matrix elements. The dependence of the transition density on the interaction is rather weak except for the third state, labeled . The transition density extends to significantly large distances, mainly due to the effect of the + configurations, so that a reliable evaluation of requires basis functions for that include configurations reaching far distances. The peak of appears at about 2 fm, which is much larger than the peak position (1.1 fm) of , where is the ground-state density of He. A comparison of the transition densities of the second () and third () states suggests that near 2–6 fm they exhibit a constructive pattern in the second state and a destructive pattern in the third state.
Figure 3: (Color online) Transition densities for the three discretized states listed in Table 2 that have strong strength.
IV.2 Test of CSM calculation
The strength function (11) calculated in the CSM using the full basis is plotted in Fig. 4 for some angles . Both AV8+3NF and G3RS+3NF potentials give similar results. With =10, shows some oscillations whose peaks appear at the energies of the discretized states shown in the full calculation of Fig. 2. To understand this behavior we note that the contribution of an eigenstate to is given by a Lorentz distribution |
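In the standard CSM bookkeeping, the presumably intended form (an assumption, since the display itself was truncated) is that a complex-scaled eigenstate with complex energy contributes

S_\nu(E) = -\frac{1}{\pi}\,\mathrm{Im}\,\frac{R_\nu}{E - E_\nu} \,,\qquad E_\nu = E_{r,\nu} - \tfrac{i}{2}\Gamma_\nu \,,

which, for a real residue R_\nu, reduces to

S_\nu(E) = \frac{R_\nu}{\pi}\,\frac{\Gamma_\nu/2}{(E - E_{r,\nu})^2 + \Gamma_\nu^2/4} \,,

i.e., a Lorentz distribution centered at E_{r,\nu} with width \Gamma_\nu.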
498ee69715b4cec1 | Mathematical Colloquium: Localization of interacting quantum particles with quasi-random disorder
Vieri Mastropietro
Università di Milano
It is well established at a mathematical level that disorder can produce Anderson localization of the eigenvectors of the single-particle Schrödinger equation. Does localization survive in the presence of many-body interaction? A positive answer to this question would have important physical consequences, related to the lack of thermalization in closed quantum systems. Mathematical results on this issue are still rare and a full understanding is a challenging problem. We present an example in which localization can be proved for the ground state of an interacting system of fermionic particles with a quasi-random Aubry-André potential. The Hamiltonian is given by $N$ coupled almost-Mathieu Schrödinger operators. By assuming Diophantine conditions on the frequency and density, we can establish exponential decay of the ground-state correlations. The proof combines methods coming from the direct proof of convergence of KAM Lindstedt series with Renormalization Group methods for many-body systems. Small divisors appear in the expansions, whose convergence follows by exploiting the Diophantine conditions and fermionic cancellations. The main difficulty comes from the presence of loop graphs, which are the signature of many-body interaction and are absent in KAM series. V. Mastropietro, Comm. Math. Phys. 342, 217 (2016); Phys. Rev. Lett. 115, 180401 (2015); Comm. Math. Phys. (2017)
|
cd78866de144406a | 2007 Schools Wikipedia Selection. Related subjects: Chemical elements
Periodic Table - Extended Periodic Table
Name, Symbol, Number hydrogen, H, 1
Chemical series nonmetals
Group, Period, Block 1, 1, s
Appearance colorless
Atomic mass 1.00794 (7) g/mol
Electron configuration 1s1
Electrons per shell 1
Physical properties
Phase gas
Density (0 °C, 101.325 kPa)
0.08988 g/L
Melting point 14.01 K
(−259.14 °C, −434.45 °F)
Boiling point 20.28 K
(−252.87 °C, −423.17 °F)
Triple point 13.8033 K, 7.042 kPa
Critical point 32.97 K, 1.293 MPa
Heat of fusion (H2) 0.117 kJ·mol−1
Heat of vaporization (H2) 0.904 kJ·mol−1
Heat capacity (25 °C) (H2)
28.836 J·mol−1·K−1
Vapor pressure
P/Pa      1    10    100    1 k    10 k    100 k
at T/K                             15      20
Atomic properties
Crystal structure hexagonal
Oxidation states 1, −1 (amphoteric oxide)
Electronegativity 2.20 (Pauling scale)
Ionization energies 1st: 1312.0 kJ/mol
Atomic radius 25 pm
Atomic radius (calc.) 53 pm ( Bohr radius)
Covalent radius 37 pm
Van der Waals radius 120 pm
Thermal conductivity (300 K) 180.5 mW·m−1·K−1
Speed of sound (gas, 27 °C) 1310 m/s
CAS registry number 1333-74-0
Selected isotopes
Main article: Isotopes of hydrogen
iso NA half-life DM DE (MeV) DP
1H 99.985% H is stable with 0 neutrons
2H 0.0115% H is stable with 1 neutron
3H trace 12.32 y β− 0.019 3He
Hydrogen ( IPA: /ˈhaɪdrə(ʊ)dʒən/, Latin: 'hydrogenium', from Ancient Greek ὕδωρ (hudor): "water" and Ancient Greek γείνομαι (geinomai): "to beget or sire") is a chemical element that, in the periodic table, has the symbol H and an atomic number of 1. At standard temperature and pressure it is a colorless, odorless, nonmetallic, tasteless, highly flammable diatomic gas (H2). With an atomic mass of 1.00794 g/ mol, hydrogen is the lightest element. It is also the most abundant, constituting roughly 75% of the universe's elemental mass. Stars in the main sequence are mainly composed of hydrogen in its plasma state. Elemental hydrogen is relatively rare on Earth, and is industrially produced from hydrocarbons, after which most free hydrogen is used "captively" (meaning locally at the production site), with the largest markets about equally divided between fossil fuel upgrading (e.g., hydrocracking) and in ammonia production (mostly for the fertilizer market). However, hydrogen can easily be produced from water using the process of electrolysis.
The most common naturally occurring isotope of hydrogen has a single proton and no neutrons. In ionic compounds it can take on either a positive charge (becoming a cation composed of a bare proton) or a negative charge (becoming an anion known as a hydride). Hydrogen can form compounds with most elements and is present in water and most organic compounds. It plays a particularly important role in acid-base chemistry, in which many reactions involve the exchange of protons between soluble molecules. As the only neutral atom for which the Schrödinger equation can be solved analytically, study of the energetics and bonding of the hydrogen atom has played a key role in the development of quantum mechanics.
The word "hydrogen" has several different meanings:
1. the name of an element.
2. an atom, sometimes called "H dot", that is abundant in space but essentially absent on earth, because it dimerizes.
3. a diatomic molecule that occurs naturally in trace amounts in the Earth's atmosphere; chemists increasingly refer to H2 as dihydrogen to distinguish this molecule from atomic hydrogen and hydrogen found in other compounds.
4. the atomic constituent within all organic compounds, water, and many other chemical compounds.
The elemental forms of hydrogen should not be confused with hydrogen as it appears in chemical compounds.
Discovery of H2
Hydrogen gas, H2, was first artificially produced and formally described by T. von Hohenheim (also known as Paracelsus, 1493–1541) via the mixing of metals with strong acids. He was unaware that the flammable gas produced by this chemical reaction was a new chemical element. In 1671, Robert Boyle rediscovered and described the reaction between iron filings and dilute acids, which results in the production of hydrogen gas. In 1766, Henry Cavendish was the first to recognize hydrogen gas as a discrete substance, by identifying the gas from a metal-acid reaction as "inflammable air", and further finding that the gas produces water when burned. Cavendish had stumbled on hydrogen when experimenting with acids and mercury. Although he wrongly assumed that hydrogen was a liberated component of the mercury rather than the acid, he was still able to accurately describe several key properties of hydrogen. He is usually given credit for its discovery as an element. In 1783, Antoine Lavoisier gave the element the name of hydrogen when he (with Laplace) reproduced Cavendish's finding that water is produced when hydrogen is burned. Lavoisier's name for the gas won out.
One of the first uses of H2 was for balloons. The H2 was obtained by reacting sulphuric acid and metallic iron. Infamously, H2 was used in the Hindenburg airship that was destroyed in a midair fire.
Role in history of quantum theory
Because of its relatively simple atomic structure, consisting only of a proton and an electron, the hydrogen atom, together with the spectrum of light produced from it or absorbed by it, has been central to the development of the theory of atomic structure. Furthermore, the corresponding simplicity of the hydrogen molecule and the corresponding cation H2+ allowed fuller understanding of the nature of the chemical bond, which followed shortly after the quantum mechanical treatment of the hydrogen atom had been developed in the mid-1920s.
One of the first quantum effects to be explicitly noticed (but not understood at the time) was Maxwell's observation, half a century before full quantum mechanical theory arrived. He observed that the specific heat capacity of H2 unaccountably departs from that of a diatomic gas below room temperature and begins to increasingly resemble that of a monatomic gas at cryogenic temperatures. According to quantum theory, this behaviour arises from the spacing of the (quantized) rotational energy levels, which are particularly wide-spaced in H2 because of its low mass. These widely spaced levels inhibit equal partition of heat energy into rotational motion in hydrogen at low temperatures. Diatomic gases composed of heavier atoms do not have such widely spaced levels and do not exhibit the same effect.
Natural occurrence
Hydrogen is the most abundant element in the universe, making up 75% of normal matter by mass and over 90% by number of atoms. This element is found in great abundance in stars and gas giant planets. Molecular clouds of H2 are associated with star formation. Hydrogen plays a vital role in powering stars through proton-proton reaction nuclear fusion.
Throughout the universe, hydrogen is mostly found in the atomic and plasma states whose properties are quite different from molecular hydrogen. As a plasma, hydrogen's electron and proton are not bound together, resulting in very high electrical conductivity and high emissivity (producing the light from the sun and other stars). The charged particles are highly influenced by magnetic and electric fields. For example, in the solar wind they interact with the Earth's magnetosphere giving rise to Birkeland currents and the aurora. Hydrogen is found in the neutral atomic state in the Interstellar medium. The large amount of neutral hydrogen found in the damped Lyman-alpha systems is thought to dominate the cosmological baryonic density of the Universe up to redshift z=4.
Under ordinary conditions on Earth, elemental hydrogen exists as the diatomic gas, H2 (for data see table). However, hydrogen gas is very rare in the Earth's atmosphere (1 ppm by volume) because of its light weight, which enables it to escape from Earth's gravity more easily than heavier gases. Although H atoms and H2 molecules are abundant in interstellar space, they are difficult to generate, concentrate, and purify on Earth. Most of the Earth's hydrogen is in the form of chemical compounds such as hydrocarbons and water. Hydrogen gas is produced by some bacteria and algae and is a natural component of flatus. Methane is a hydrogen source of increasing importance.
The hydrogen atom
Electron energy levels
Depiction of a hydrogen-1 atom, or protium, showing the Van der Waals radius and the proton nucleus
The ground state energy level of the electron in a hydrogen atom is 13.6 eV, which is equivalent to an ultraviolet photon of roughly 92 nm.
The energy levels of hydrogen can be calculated fairly accurately using the Bohr model of the atom, which conceptualizes the electron as "orbiting" the proton in analogy to the Earth's orbit of the sun. However, the electromagnetic force attracts electrons and protons to one another, while planets and celestial objects are attracted to each other by gravity. Because of the discretization of angular momentum postulated in early quantum mechanics by Bohr, the electron in the Bohr model can only occupy certain allowed distances from the proton, and therefore only certain allowed energies. A more accurate description of the hydrogen atom comes from a purely quantum mechanical treatment that uses the Schrödinger equation to calculate the probability density of the electron around the proton. Treating the electron as a matter wave reproduces chemical results such as shape of the hydrogen atom more naturally than the particle-based Bohr model, although the energy and spectral results are the same. Modeling the system fully using the reduced mass of nucleus and electron (as one would do in the two-body problem in celestial mechanics) yields an even better formula for the hydrogen spectra, and also the correct spectral shifts for the isotopes deuterium and tritium. Very small adjustments in energy levels in the hydrogen atom, which correspond to actual spectral effects, may be determined by using a full quantum mechanical theory which corrects for the effects of special relativity (see Dirac equation), and by accounting for quantum effects arising from production of virtual particles in the vacuum and as a result of electric fields (see quantum electrodynamics).
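A short numerical check of the figures quoted in this section: the Bohr formula E_n = −13.6 eV/n² reproduces both the 13.6 eV ground-state binding energy and the "roughly 92 nm" ionization photon. The script below is illustrative only, not from the article.

```python
# Bohr-model check: E_n = -13.6 eV / n^2, photon wavelength lambda = h*c / E.
H, C, EV = 6.62607015e-34, 2.99792458e8, 1.602176634e-19  # SI constants

def bohr_energy_ev(n):
    return -13.6057 / n ** 2            # energy of level n in eV

e_ion = abs(bohr_energy_ev(1)) * EV     # ground-state binding energy in joules
print(H * C / e_ion * 1e9)              # -> 91.2 nm, the "roughly 92 nm" photon

de = (bohr_energy_ev(2) - bohr_energy_ev(1)) * EV   # Lyman-alpha transition
print(H * C / de * 1e9)                 # -> 121.5 nm
```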
In hydrogen gas, the electronic ground state energy level is split into hyperfine structure levels because of magnetic effects of the quantum mechanical spin of the electron and proton. The energy of the atom when the proton and electron spins are aligned is higher than when they are not aligned. The transition between these two states can occur through emission of a photon through a magnetic dipole transition. Radio telescopes can detect the radiation produced in this process, which is used to map the distribution of hydrogen in the galaxy.
Protium, the most common isotope of hydrogen, has one proton and one electron. Unique among all stable isotopes, it has no neutrons. (see diproton for discussion of why others do not exist)
Hydrogen has three naturally occurring isotopes, denoted 1H, 2H, and 3H. Other, highly unstable nuclei (4H to 7H) have been synthesized in the laboratory but not observed in nature.
• 2H, the other stable hydrogen isotope, is known as deuterium and contains one proton and one neutron in its nucleus. Deuterium comprises 0.0026–0.0184% of all hydrogen on Earth. It is not radioactive, and does not represent a significant toxicity hazard. Water enriched in molecules that include deuterium instead of normal hydrogen is called heavy water. Deuterium and its compounds are used as a non-radioactive label in chemical experiments and in solvents for 1H- NMR spectroscopy. Heavy water is used as a neutron moderator and coolant for nuclear reactors. Deuterium is also a potential fuel for commercial nuclear fusion.
• 3H is known as tritium and contains one proton and two neutrons in its nucleus. It is radioactive, decaying through beta decay with a half-life of 12.32 years. Small amounts of tritium occur naturally because of the interaction of cosmic rays with atmospheric gases; tritium has also been released during nuclear weapons tests. It is used in nuclear fusion reactions, as a tracer in isotope geochemistry, and in specialized self-powered lighting devices. Tritium was once routinely used in chemical and biological labeling experiments as a radiolabel (this has become less common).
Hydrogen is the only element that has different names for its isotopes in common use today. (During the early study of radioactivity, various heavy radioactive isotopes were given names, but such names are no longer used). The symbols D and T (instead of 2H and 3H) are sometimes used for deuterium and tritium, but the corresponding symbol P is already in use for phosphorus and thus is not available for protium. IUPAC states that while this use is common it is not preferred.
Elemental molecular forms
First tracks observed in liquid hydrogen bubble chamber
There are two different types of diatomic hydrogen molecules that differ by the relative spin of their nuclei. In the orthohydrogen form, the spins of the two protons are parallel and form a triplet state; in the parahydrogen form the spins are antiparallel and form a singlet. At standard temperature and pressure, hydrogen gas contains about 25% of the para form and 75% of the ortho form, also known as the "normal form". The equilibrium ratio of orthohydrogen to parahydrogen depends on temperature, but since the ortho form is an excited state and has a higher energy than the para form, it is unstable and cannot be purified. At very low temperatures, the equilibrium state is composed almost exclusively of the para form. The physical properties of pure parahydrogen differ slightly from those of the normal form. The ortho/para distinction also occurs in other hydrogen-containing molecules or functional groups, such as water and methylene.
The uncatalyzed interconversion between para and ortho H2 increases with increasing temperature; thus rapidly condensed H2 contains large quantities of the high-energy ortho form that convert to the para form very slowly. The ortho/para ratio in condensed H2 is an important consideration in the preparation and storage of liquid hydrogen: the conversion from ortho to para is exothermic and produces enough heat to evaporate the hydrogen liquid, leading to loss of the liquefied material. Catalysts for the ortho-para interconversion, such as iron compounds, are used during hydrogen cooling.
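The temperature dependence of the ortho/para equilibrium described above can be sketched numerically. The rotational temperature of H2 used below (about 87.6 K) is a textbook rigid-rotor value and an outside assumption, not a figure from this article.

```python
import numpy as np

THETA_ROT = 87.6  # rotational temperature of H2 in kelvin (textbook value, assumed)

def ortho_fraction(temp_k, jmax=40):
    j = np.arange(jmax)
    w = (2 * j + 1) * np.exp(-THETA_ROT * j * (j + 1) / temp_k)
    z_para = w[j % 2 == 0].sum()         # even J couple to the singlet (weight 1)
    z_ortho = 3.0 * w[j % 2 == 1].sum()  # odd J couple to the triplet (weight 3)
    return z_ortho / (z_ortho + z_para)

for t in (20.0, 77.0, 300.0):
    print(t, round(ortho_fraction(t), 3))  # ~0.0 at 20 K, ~0.48 at 77 K, ~0.75 at 300 K
```

The high-temperature limit reproduces the 75% ortho ("normal") mixture quoted above, while at liquid-hydrogen temperatures the equilibrium is almost pure para.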
Chemical and physical properties
The solubility and adsorption characteristics of hydrogen with various metals are very important in metallurgy (as many metals can suffer hydrogen embrittlement) and in developing safe ways to store it for use as a fuel. Hydrogen is highly soluble in many compounds composed of rare earth metals and transition metals and can be dissolved in both crystalline and amorphous metals. Hydrogen solubility in metals is influenced by local distortions or impurities in the metal crystal lattice.
Hydrogen can combust rapidly in air, and was blamed for the Hindenburg disaster of May 6, 1937
Hydrogen gas is highly flammable and will burn at concentrations as low as 4% H2 in air. The enthalpy of combustion for hydrogen is –286 kJ/mol; it combusts according to the following balanced equation.
2 H2(g) + O2(g) → 2 H2O(l) + 572 kJ
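A quick consistency check of the two numbers just quoted (illustrative arithmetic only): the equation releases 572 kJ per 2 mol of H2, matching the per-mole enthalpy of −286 kJ/mol.

```python
heat_released_kj = 572.0          # kJ per reaction as written (2 mol of H2)
print(heat_released_kj / 2.0)     # -> 286.0, i.e. enthalpy of combustion = -286 kJ/mol
```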
When mixed with oxygen across a wide range of proportions, hydrogen explodes upon ignition. Hydrogen burns violently in air. Hydrogen-oxygen flames are nearly invisible to the naked eye, as illustrated by the faintness of flame from the main Space Shuttle engines (as opposed to the easily visible flames from the shuttle boosters). Thus it is difficult to visually detect if a hydrogen leak is burning. The Hindenburg zeppelin flames seen in the adjacent picture are from the covering skin of the zeppelin which contained carbon and pyrophoric aluminium powder that may have started the fire. Another characteristic of hydrogen fires is that the flames tend to ascend rapidly with the gas in air, causing less damage than hydrocarbon fires. Two-thirds of the Hindenburg passengers survived and deaths were from falling or from gasoline burns.
H2 reacts directly with other oxidizing elements. A violent and spontaneous reaction can occur at room temperature with chlorine and fluorine, forming the corresponding hydrogen halides, hydrogen chloride and hydrogen fluoride.
Covalent and organic compounds
While H2 is not very reactive under standard conditions, it does form compounds with most elements. Millions of hydrocarbons are known, but they are not formed by the direct reaction of elementary hydrogen and carbon. Hydrogen can form compounds with elements that are more electronegative, such as halogens (e.g., F, Cl, Br, I) and chalcogens (O, S, Se); in these compounds hydrogen takes on a partial positive charge. When bonded to fluorine, oxygen, or nitrogen, hydrogen can participate in a form of strong noncovalent bonding called hydrogen bonding, which is critical to the stability of many biological molecules. Hydrogen also forms compounds with less electronegative elements, such as the metals and metalloids, in which it takes on a partial negative charge. These compounds are often known as hydrides.
Hydrogen forms a vast array of compounds with carbon. Because of their general association with living things, these compounds came to be called organic compounds; the study of their properties is known as organic chemistry and their study in the context of living organisms is known as biochemistry. By some definitions, "organic" compounds are only required to contain carbon (as a classic historical example, urea). However, most of them also contain hydrogen, and since it is the carbon-hydrogen bond which gives this class of compounds most of its particular chemical characteristics, carbon-hydrogen bonds are required in some definitions of the word "organic" in chemistry. (This latter definition is not perfect, however, as in this definition urea would not be included as an organic compound).
In inorganic chemistry, hydrides can also serve as bridging ligands that link two metal centers in a coordination complex. This function is particularly common in group 13 elements, especially in boranes (boron hydrides) and aluminium complexes, as well as in clustered carboranes.
Compounds of hydrogen are often called hydrides, a term that is used fairly loosely. To chemists, the term "hydride" usually implies that the H atom has acquired a negative or anionic character, denoted H−. The existence of the hydride anion, suggested by G.N. Lewis in 1916 for group I and II salt-like hydrides, was demonstrated by Moers in 1920 with the electrolysis of molten lithium hydride (LiH), which produced a stoichiometric quantity of hydrogen at the anode. For hydrides other than those of group I and II metals, the term is quite misleading, considering the low electronegativity of hydrogen. An exception in group II hydrides is BeH2, which is polymeric. In lithium aluminium hydride, the AlH4− anion carries hydridic centers firmly attached to the Al(III). Although hydrides can be formed with almost all main-group elements, the number and combination of possible compounds varies widely; for example, there are over 100 binary borane hydrides known, but only one binary aluminium hydride. Binary indium hydride has not yet been identified, although larger complexes exist.
"Protons" and acids
Oxidation of H2 formally gives the proton, H+. This species is central to discussion of acids, though the term proton is used loosely to refer to positively charged or cationic hydrogen, denoted H+. A bare proton H+ cannot exist in solution because of its strong tendency to attach itself to atoms or molecules with electrons. To avoid the convenient fiction of the naked "solvated proton" in solution, acidic aqueous solutions are sometimes considered to contain the hydronium ion (H3O+) organized into clusters to form H9O4+. Other oxonium ions are found when water is in solution with other solvents.
Although exotic on earth, one of the most common ions in the universe is the H3+ ion, known as protonated molecular hydrogen or the triatomic hydrogen cation.
Laboratory syntheses
In the laboratory, H2 is usually prepared by the reaction of acids on metals such as zinc.
Zn + 2 H+ → Zn2+ + H2
Aluminium produces H2 upon treatment with acids but also with base:
2 Al + 2 OH− + 6 H2O → 2 Al(OH)4− + 3 H2
The electrolysis of water is a simple method of producing hydrogen, although the resulting hydrogen necessarily has less energy content than was required to produce it. A low voltage current is run through the water, and gaseous oxygen forms at the anode while gaseous hydrogen forms at the cathode. Typically the cathode is made from platinum or another inert metal when producing hydrogen for storage. If, however, the gas is to be burnt on site, oxygen is desirable to assist the combustion, and so both electrodes would be made from inert metals. (Iron, for instance, would oxidize, and thus decrease the amount of oxygen given off.) The theoretical maximum efficiency (electricity used vs. energetic value of hydrogen produced) is between 80–94%. Bellona Report on Hydrogen
2H2O(aq) → 2H2(g) + O2(g)
Industrial syntheses
Hydrogen can be prepared in several different ways but the economically most important processes involve removal of hydrogen from hydrocarbons. Commercial bulk hydrogen is usually produced by the steam reforming of natural gas. At high temperatures (700–1100 °C; 1,300–2,000 °F), steam (water vapor) reacts with methane to yield carbon monoxide and H2.
CH4 + H2O ⇌ CO + 3 H2
This reaction is favored at low pressures but is nonetheless conducted at high pressures (20 atm; 600 inHg) since high pressure H2 is the most marketable product. The product mixture is known as " synthesis gas" because it is often used directly for the production of methanol and related compounds. Hydrocarbons other than methane can be used to produce synthesis gas with varying product ratios. One of the many complications to this highly optimized technology is the formation of coke or carbon:
CH4 → C + 2 H2
Consequently, steam reforming typically employs an excess of H2O.
Additional hydrogen from steam reforming can be recovered from the carbon monoxide through the water gas shift reaction, especially with an iron oxide catalyst. This reaction is also a common industrial source of carbon dioxide:
CO + H2O ⇌ CO2 + H2
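Adding the steam-reforming step to the shift reaction gives the overall stoichiometry for hydrogen from methane (a simple bookkeeping sketch of the two quoted equations, not a statement about actual plant yields):

CH4 + 2 H2O → CO2 + 4 H2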
Other important methods for H2 production include partial oxidation of hydrocarbons:
CH4 + 0.5 O2 → CO + 2 H2
and the coal reaction, which can serve as a prelude to the shift reaction above:
C + H2O → CO + H2
NB. Hydrogen is sometimes produced and consumed in the same industrial process, without being separated. In the Haber process for the production of ammonia (the world's fifth most produced industrial compound), hydrogen is generated from natural gas.
Biological syntheses
H2 is a product of some types of anaerobic metabolism and is produced by several microorganisms, usually via reactions catalyzed by iron- or nickel-containing enzymes called hydrogenases. These enzymes catalyze the reversible redox reaction between H2 and its component two protons and two electrons. Evolution of hydrogen gas occurs in the transfer of reducing equivalents produced during pyruvate fermentation to water.
Water splitting, in which water is decomposed into its component protons, electrons, and oxygen, occurs in the light reactions in all photosynthetic organisms. Some such organisms — including the alga Chlamydomonas reinhardtii and cyanobacteria — have evolved a second step in the dark reactions in which protons and electrons are reduced to form H2 gas by specialized hydrogenases in the chloroplast. Efforts have been undertaken to genetically modify cyanobacterial hydrogenases to efficiently synthesize H2 gas even in the presence of oxygen.
Other rarer but mechanistically interesting routes to H2 production also exist in nature. Nitrogenase produces approximately one equivalent of H2 for each equivalent of N2 reduced to ammonia. Some phosphatases reduce phosphite to H2.
Applications
Large quantities of H2 are needed in the petroleum and chemical industries. The largest application of H2 is for the processing ("upgrading") of fossil fuels, and in the production of ammonia. The key consumers of H2 in the petrochemical plant include hydrodealkylation, hydrodesulfurization, and hydrocracking. H2 has several other important uses. H2 is used as a hydrogenating agent, particularly in increasing the level of saturation of unsaturated fats and oils (found in items such as margarine), and in the production of methanol. It is similarly the source of hydrogen in the manufacture of hydrochloric acid. H2 is also used as a reducing agent of metallic ores.
Apart from its use as a reactant, H2 has wide applications in physics and engineering. It is used as a shielding gas in welding methods such as atomic hydrogen welding. H2 is used as the rotor coolant in electrical generators at power stations, because it has the highest thermal conductivity of any gas. Liquid H2 is used in cryogenic research, including superconductivity studies. Since H2 is lighter than air, having a little more than 1/15th of the density of air, it was once widely used as a lifting agent in balloons and airships. However, this use was curtailed after the Hindenburg disaster convinced the public that the gas was too dangerous for this purpose.
Hydrogen's rarer isotopes also each have specific applications. Deuterium (hydrogen-2) is used in nuclear fission applications as a moderator to slow neutrons, and in nuclear fusion reactions. Deuterium compounds have applications in chemistry and biology in studies of reaction isotope effects. Tritium (hydrogen-3), produced in nuclear reactors, is used in the production of hydrogen bombs, as an isotopic label in the biosciences, and as a radiation source in luminous paints.
Hydrogen as an energy carrier
Having been used as an ingredient in some rocket fuels for several decades, hydrogen, or more specifically H2, is now widely discussed in the context of energy. Hydrogen is not an energy source, since it is not an abundant natural resource and more energy is used to produce it than can be ultimately extracted from it. However, it could become useful as a carrier of energy, as elucidated in the United States Department of Energy's 2003 report, "Among the various alternative energy strategies, building an energy infrastructure that uses hydrogen — the third most abundant element on the earth's surface — as the primary carrier that connects a host of energy sources to diverse end uses may enable a secure and clean energy future for the Nation." The hydrogen would then locally be converted into usable energy either via combustion of fossil fuels or by electrochemical conversion into electricity in a fuel cell.
One theoretical advantage of using H2 as a carrier is the localization and concentration of environmentally unwelcome aspects of hydrogen manufacture. For example, CO2 sequestration could be conducted at the point of H2 production from methane. Hydrogen could also be produced using the electrolysis of water method; however, this is currently three to six times as expensive as production from natural gas. High-temperature electrolysis, which promises greater efficiency, is being investigated. Currently, however, hydrogen production is expensive relative to other energy storage chemicals, and the bulk of hydrogen is now produced by the least expensive method, which (as noted) employs methane and which, as currently practiced, creates greenhouse gas emissions.
Retrieved from " http://en.wikipedia.org/wiki/Hydrogen" |
5c450f91921b97fd | The Full Wiki
Optics: Quiz
Question 1: These effects are treated by the ________.
Fresnel equations / Refractive index / Anti-reflective coating / Specular reflection
Question 2: Young's famous ________ showed that light followed the law of superposition, something normal particles do not follow.
Wave–particle duality / Double-slit experiment / Quantum mechanics / Introduction to quantum mechanics
Question 3: For example, this is the case with macroscopic crystals of ________, which present the viewer with two offset, orthogonally polarized images of whatever is viewed through them.
Carbon / Calcium carbonate / Aragonite / Calcite
Question 4: The earliest known working telescopes were ________, a type which relies entirely on lenses for magnification.
Optical telescope / Lens (optics) / Chromatic aberration / Refracting telescope
Question 5: Scattering off of ice crystals and other particles in the atmosphere are responsible for halos, afterglows, coronas, rays of sunlight, and ________.
Sun dog / Mars / Saturn / Vädersolstavlan
Question 6: Rainbows and ________ are examples of optical phenomena.
Earth / Sun / Mirage / Green flash
Question 7: Electronic ________, such as CCDs, exhibit shot noise corresponding to the statistics of individual photon events.
Digital single-lens reflex camera / Nikon D3 / Canon EOS 5D Mark II / Image sensor
Question 8: A small proportion of light scattering from atoms or molecules may undergo ________, wherein the frequency changes due to excitation of the atoms and molecules.
Raman scattering / Raman spectroscopy / Resonance Raman spectroscopy / Photon
Question 9: Optical polarization is principally of importance in ________ due to circular dichroism and optical rotation ("circular birefringence") exhibited by optically active (chiral) molecules.
Electrochemistry / Periodic table / Chemistry / Inorganic chemistry
Question 10: This results in ________ and a decrease in the amplitude of the wave, which for light is associated with a dimming of the waveform at that location.
Schrödinger equation / Quantum mechanics / Interference (wave propagation) / Introduction to quantum mechanics
|
57b57d85bbef8f00 | A poll on the foundations of quantum theory
Erwin Schrödinger. Discussions of quantum foundations often seem to involve his much abused cat.
The group of physicists seriously engaged in studies of the “foundations” or “interpretation” of quantum theory is a small sliver of the broader physics community (perhaps a few hundred scientists among tens of thousands). Yet in my experience most scientists doing research in other areas of physics enjoy discussing foundational questions over coffee or beer.
The central question concerns quantum measurement. As often expressed, the axioms of quantum mechanics (see Sec. 2.1 of my notes here) distinguish two different ways for a quantum state to change. When the system is not being measured its state vector rotates continuously, as described by the Schrödinger equation. But when the system is measured its state “collapses” discontinuously. The Measurement Problem (or at least one version of it) is the challenge to explain why the mathematical description of measurement is different from the description of other physical processes.
My own views on such questions are rather unsophisticated and perhaps a bit muddled:
1) I know no good reason to disbelieve that all physical processes, including measurements, can be described by the Schrödinger equation.
2) But to describe measurement this way, we must include the observer as part of the evolving quantum system.
3) This formalism does not provide us observers with deterministic predictions for the outcomes of the measurements we perform. Therefore, we are forced to use probability theory to describe these outcomes.
4) Once we accept this role for probability (admittedly a big step), then the Born rule (the probability is proportional to the modulus squared of the wave function) follows from simple and elegant symmetry arguments. (These are described for example by Zurek – see also my class notes here. As a technical aside, what is special about the L2 norm is its rotational invariance, implying that the probability measure picks out no preferred basis in the Hilbert space.) A toy numerical illustration of this point appears after the list below.
5) The “classical” world arises due to decoherence, that is, pervasive entanglement of an observed quantum system with its unobserved environment. Decoherence picks out a preferred basis in the Hilbert space, and this choice of basis is determined by properties of the Hamiltonian, in particular its spatial locality.
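Here is the promised illustration of points 3 and 4 (a sketch, not a derivation): Born-rule probabilities from a normalized state vector, and the fact that a unitary rotation of the basis leaves the L2 norm, and hence the total probability, unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)

# A random normalized state vector in a 4-dimensional Hilbert space.
psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi /= np.linalg.norm(psi)

probs = np.abs(psi) ** 2              # Born rule: p_i = |<i|psi>|^2
print(probs, probs.sum())             # the probabilities sum to 1

# Rotational invariance of the L2 norm: a random unitary change of basis
# redistributes the p_i but preserves their sum.
q, _ = np.linalg.qr(rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4)))
print(np.linalg.norm(q @ psi))        # -> 1.0, no preferred basis is picked out
```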
|
d5eb98bf1a256a6c | MFV3D Book Archive > Atomic Nuclear Physics > Download A collection of problems in atomic and nuclear physics by I. E Irodov PDF
By I. E. Irodov
Show description
Read Online or Download A collection of problems in atomic and nuclear physics PDF
Similar atomic & nuclear physics books
Cumulative Subject and Author Indexes for Volumes 1-38
These indexes are valuable volumes in the serial, bringing together what has been published over the last 38 volumes. They include a preface by the editor of the series, an author index, a subject index, a cumulative list of chapter titles, and listings of contents by volume.
Many-Body Schrödinger Dynamics of Bose-Einstein Condensates
At extremely low temperatures, clouds of bosonic atoms form what is known as a Bose-Einstein condensate. Recently, it has become clear that many different types of condensates -- so-called fragmented condensates -- exist. In order to tell whether fragmentation occurs or not, it is necessary to solve the full many-body Schrödinger equation, a task that remained elusive for experimentally relevant conditions for many years.
The Theory of Coherent Atomic Excitation (two-volume set)
This book examines the nature of the coherent excitation produced in atoms by lasers. It examines the detailed transient evolution of excited-state populations with time and with controllable parameters such as laser frequency and intensity. The discussion assumes modest prior knowledge of elementary quantum mechanics and, in some sections, a nodding acquaintance with Maxwell's equations of electrodynamics.
Electron-Electron Correlation Effects in Low-Dimensional Conductors and Superconductors
Advances in the physics and chemistry of low-dimensional systems have been quite remarkable in the last few decades. Hundreds of quasi-one-dimensional and quasi-two-dimensional systems have been synthesized and studied. The most popular representatives of quasi-one-dimensional materials are the polyacetylenes (CH)x [1] and conducting donor-acceptor molecular crystals such as TTF-TCNQ.
Additional resources for A collection of problems in atomic and nuclear physics
Example text
c. magnetic field ν0 = 40 MHz. Determine the gyromagnetic ratio and nuclear magnetic moment. 29. The magnetic resonance method was used to study the magnetic properties of 7Li19F molecules whose electron shells possess zero angular momentum. c. magnetic field. The control experiments showed that the peaks belong to lithium and fluorine atoms respectively. Find the magnetic moments of these nuclei. The spins of the nuclei are supposed to be known. 30. On the basis of that assumption, evaluate the highest kinetic energy of nucleons inside a nucleus.
10^22 cm−3. 25. … ≈1.7 μm at very low temperatures. Calculate the temperature coefficient of resistance of this semiconductor at T = 300 K. 26. … 1.2 times when the temperature is raised from T1 = 300 K to T2 = 400 K. 27. Figure 28 illustrates the logarithmic electric conductance as a function of reciprocal temperature (T in kelvins) for boron-doped silicon (n-type semiconductor). Explain the shape of the graph. By means of the graph, find the width of the forbidden band in silicon and the activation energy of boron atoms.
Find the half-lives of both components and the ratio of radioactive nuclei of these components at the moment t = 0. 13. A radionuclide A1 with decay constant λ1 transforms into a radionuclide A2 with decay constant λ2. Assuming that at the initial moment the preparation consisted of only N10 nuclei of the radionuclide A1, find: (a) the number of nuclei of the radionuclide A2 after a time interval t; (b) the time interval after which the number of nuclei of the radionuclide A2 reaches its maximum value; (c) under what condition the transitional equilibrium state can evolve, so that the ratio of the amounts of the radionuclides remains constant.
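Problem 13 above is the standard two-member decay chain; a minimal sketch of its textbook solution (assuming λ1 ≠ λ2, with illustrative parameter values):

```python
import numpy as np

def n2(t, n10, lam1, lam2):
    """Number of A2 nuclei at time t for the chain A1 -> A2 -> (decay)."""
    return n10 * lam1 / (lam2 - lam1) * (np.exp(-lam1 * t) - np.exp(-lam2 * t))

def t_max(lam1, lam2):
    """Time at which N2 peaks, from setting dN2/dt = 0."""
    return np.log(lam2 / lam1) / (lam2 - lam1)

lam1, lam2, n10 = 0.05, 0.20, 1.0e6   # illustrative decay constants and N10
tm = t_max(lam1, lam2)
print(tm, n2(tm, n10, lam1, lam2))
# (c) Transitional equilibrium (a constant N2/N1 ratio at late times) requires lam1 < lam2.
```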
|
fa637331e353eb7c | Relativistic wave equations
From Wikipedia, the free encyclopedia
"Relativistic quantum field equations" redirects to here.
In physics, specifically relativistic quantum mechanics (RQM) and its applications to particle physics, relativistic wave equations predict the behavior of particles at high energies and velocities comparable to the speed of light. In the context of quantum field theory (QFT), the equations determine the dynamics of quantum fields.
The solutions to the equations, universally denoted as ψ or Ψ (Greek psi), are referred to as "wavefunctions" in the context of RQM, and "fields" in the context of QFT. The equations themselves are called "wave equations" or "field equations", because they have the mathematical form of a wave equation or are generated from a Lagrangian density and the field-theoretic Euler–Lagrange equations (see classical field theory for background).
In the Schrödinger picture, the wavefunction or field is the solution to the Schrödinger equation;
i\hbar\frac{\partial}{\partial t}\psi = \hat{H} \psi
one of the postulates of quantum mechanics. All relativistic wave equations can be constructed by specifying various forms of the Hamiltonian operator Ĥ describing the quantum system. Alternatively, Feynman's path integral formulation uses a Lagrangian rather than a Hamiltonian operator.
More generally, the modern formalism behind relativistic wave equations is Lorentz group theory, wherein the spin of the particle has a correspondence with the representations of the Lorentz group.[1]
Early 1920s: Classical and quantum mechanics
The failure of classical mechanics when applied to molecular, atomic, and nuclear systems (and smaller) induced the need for a new mechanics: quantum mechanics. The mathematical formulation was led by de Broglie, Bohr, Schrödinger, Pauli, Heisenberg, and others, around the mid-1920s, and at that time was analogous to that of classical mechanics. The Schrödinger equation and the Heisenberg picture resemble the classical equations of motion in the limit of large quantum numbers and as the reduced Planck constant ħ, the quantum of action, tends to zero. This is the correspondence principle. At this point, special relativity was not fully combined with quantum mechanics, so the Schrödinger and Heisenberg formulations, as originally proposed, could not be used in situations where the particles travel near the speed of light, or when the number of each type of particle changes (this happens in real particle interactions: the numerous forms of particle decays, annihilation, matter creation, pair production, and so on).
Late 1920s: Relativistic quantum mechanics of spin-0 and spin-1/2 particles
A description of quantum mechanical systems which could account for relativistic effects was sought by many theoretical physicists from the late 1920s to the mid-1940s.[2] The first basis for relativistic quantum mechanics, i.e. special relativity and quantum mechanics applied together, was found by all those who discovered what is frequently called the Klein–Gordon equation:
-\hbar^2\frac{\partial^2 \psi}{\partial t^2} +(\hbar c)^2\nabla^2\psi = (mc^2)^2\psi \,, \qquad (1)
by inserting the energy operator and momentum operator into the relativistic energy–momentum relation:

E^2 = (\mathbf{p}c)^2 + (mc^2)^2 \,. \qquad (2)
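Explicitly (a short derivation sketch added for clarity): substituting the operators

E \to i\hbar\frac{\partial}{\partial t} \,, \qquad \mathbf{p} \to -i\hbar\nabla

into (2) and letting both sides act on ψ gives -\hbar^2 \partial_t^2 \psi = -(\hbar c)^2 \nabla^2 \psi + (mc^2)^2 \psi, which is (1) rearranged.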
The solutions to (1) are scalar fields. The KG equation is undesirable due to its prediction of negative energies and probabilities, a consequence of the quadratic nature of (2), which is inevitable in a relativistic theory. This equation was initially proposed by Schrödinger, and he discarded it for such reasons, only to realize a few months later that its non-relativistic limit (what is now called the Schrödinger equation) was still of importance. Nevertheless, (1) is applicable to spin-0 bosons.[3]
Neither the non-relativistic nor relativistic equations found by Schrödinger could predict the fine structure in the hydrogen spectral series. The mysterious underlying property was spin. The first two-dimensional spin matrices (better known as the Pauli matrices) were introduced by Pauli in the Pauli equation: the Schrödinger equation with a non-relativistic Hamiltonian including an extra term for particles in magnetic fields, but this was phenomenological. Weyl found a relativistic equation in terms of the Pauli matrices: the Weyl equation, for massless spin-1/2 fermions. The problem was resolved by Dirac in the late 1920s, when he furthered the application of equation (2) to the electron; by various manipulations he factorized the equation into the form:
\left(\frac{E}{c} - \boldsymbol{\alpha}\cdot\mathbf{p} - \beta mc \right)\left(\frac{E}{c} + \boldsymbol{\alpha}\cdot\mathbf{p} + \beta mc \right)\psi=0 \,, \qquad (3A)
and one of these factors is the Dirac equation (see below), upon inserting the energy and momentum operators. For the first time, this introduced new four-dimensional spin matrices α and β in a relativistic wave equation, and explained the fine structure of hydrogen. The solutions to (3A) are multi-component spinor fields, and each component satisfies (1). A remarkable result of spinor solutions is that half of the components describe a particle, while the other half describe an antiparticle; in this case the electron and positron. The Dirac equation is now known to apply for all massive spin-1/2 fermions. In the non-relativistic limit, the Pauli equation is recovered, while the massless case results in the Weyl equation.
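To see why the factorization in (3A) reproduces (2), note (a short verification added for clarity) that the product of the two factors collapses by the algebra of the matrices:

\left(\frac{E}{c}\right)^2 - (\boldsymbol{\alpha}\cdot\mathbf{p} + \beta mc)^2 = \left(\frac{E}{c}\right)^2 - \mathbf{p}^2 - (mc)^2 \,,

provided \alpha_i\alpha_j + \alpha_j\alpha_i = 2\delta_{ij}, \beta^2 = 1, and \alpha_i\beta + \beta\alpha_i = 0. No ordinary numbers satisfy these relations, which is what forces α and β to be matrices of dimension at least 4 × 4.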
Although a landmark in quantum theory, the Dirac equation holds only for spin-1/2 fermions, and still predicts negative energy solutions, which caused controversy at the time (in particular, not all physicists were comfortable with the "Dirac sea" of negative energy states).
1930s–1960s: Relativistic quantum mechanics of higher-spin particles
The natural problem became clear: to generalize the Dirac equation to particles with any spin, both fermions and bosons, and in the same equations their antiparticles (possible because of the spinor formalism introduced by Dirac in his equation, and then-recent developments in spinor calculus by van der Waerden in 1929), and ideally with positive energy solutions.[2]
This was introduced and solved by Majorana in 1932, by an approach that deviated from Dirac's. Majorana considered one "root" of (3A):

\left(\frac{E}{c} + \boldsymbol{\alpha}\cdot\mathbf{p} - \beta mc \right)\psi = 0 \,, \qquad (3B)

where ψ is a spinor field, now with infinitely many components, irreducible to a finite number of tensors or spinors, to remove the indeterminacy in sign. The matrices α and β are infinite-dimensional matrices, related to infinitesimal Lorentz transformations. He did not demand that each component of (3B) satisfy equation (2); instead he regenerated the equation using a Lorentz-invariant action, via the principle of least action, and application of Lorentz group theory.[4][5]
Majorana produced other important contributions that were unpublished, including wave equations of various dimensions (5, 6, and 16). They were taken up again later (in a more involved way) by de Broglie (1934), and by Duffin, Kemmer, and Petiau (around 1938–1939); see Duffin–Kemmer–Petiau algebra. The Dirac–Fierz–Pauli formalism was more sophisticated than Majorana's, as spinors were new mathematical tools in the early twentieth century, although Majorana's paper of 1932 was difficult to fully understand; it took Pauli and Wigner some time to understand it, around 1940.[2]
Dirac in 1936, and Fierz and Pauli in 1939, built equations from irreducible spinors A and B, symmetric in all indices, for a massive particle of spin n + ½ for integer n (see Van der Waerden notation for the meaning of the dotted indices):
p_{\gamma\dot{\alpha}}A_{\epsilon_1\epsilon_2\cdots\epsilon_n}^{\dot{\alpha}\dot{\beta}_1\dot{\beta}_2\cdots\dot{\beta}_n} = mcB_{\gamma\epsilon_1\epsilon_2\cdots\epsilon_n}^{\dot{\beta}_1\dot{\beta}_2\cdots\dot{\beta}_n}
p^{\gamma\dot{\alpha}}B_{\gamma\epsilon_1\epsilon_2\cdots\epsilon_n}^{\dot{\beta}_1\dot{\beta}_2\cdots\dot{\beta}_n} = mcA_{\epsilon_1\epsilon_2\cdots\epsilon_n}^{\dot{\alpha}\dot{\beta}_1\dot{\beta}_2\cdots\dot{\beta}_n}
where p is the momentum as a covariant spinor operator. For n = 0, the equations reduce to the coupled Dirac equations and A and B together transform as the original Dirac spinor. Eliminating either A or B shows that A and B each fulfill (1).[2]
In 1941, Rarita and Schwinger focused on spin-3/2 particles and derived the Rarita–Schwinger equation, including a Lagrangian to generate it, and later generalized the equations to particles of spin n + ½ for integer n. In 1945, Pauli suggested Majorana's 1932 paper to Bhabha, who returned to the general ideas introduced by Majorana in 1932. Bhabha and Lubanski proposed a completely general set of equations by replacing the mass terms in (3A) and (3B) by an arbitrary constant, subject to a set of conditions which the wavefunctions must obey.[6]
Finally, in the year 1948 (the same year as Feynman's path integral formulation was cast), Bargmann and Wigner formulated the general equation for massive particles which could have any spin, by considering the Dirac equation with a totally symmetric finite-component spinor, and using Lorentz group theory (as Majorana did): the Bargmann–Wigner equations.[2][7] In the early 1960s, a reformulation of the Bargmann–Wigner equations was made by H. Joos and Steven Weinberg. Various theorists at this time did further research in relativistic Hamiltonians for higher spin particles.[1][8][9]
The relativistic description of particles with spin has been a difficult problem in quantum theory. It is still an area of present-day research, because the problem is only partially solved; including interactions in the equations is problematic, and paradoxical predictions (even from the Dirac equation) are still present.[5]
Linear equations
Further information: Linear differential equation
The following equations have solutions which satisfy the superposition principle, that is, the wavefunctions are additive.
Throughout, the standard conventions of tensor index notation and Feynman slash notation are used, including Greek indices which take the values 1, 2, 3 for the spatial components and 0 for the timelike component of the indexed quantities. The wavefunctions are denoted ψ, and ∂μ are the components of the four-gradient operator.
In matrix equations, the Pauli matrices are denoted by σμ in which μ = 0, 1, 2, 3, where σ0 is the 2 × 2 identity matrix:
\sigma^0 = \begin{pmatrix} 1&0 \\ 0&1 \\ \end{pmatrix}
and the other matrices have their usual representations. The expression
\sigma^\mu \partial_\mu \equiv \sigma^0 \partial_0 + \sigma^1 \partial_1 + \sigma^2 \partial_2 + \sigma^3 \partial_3
is a 2 × 2 matrix operator which acts on 2-component spinor fields.
The gamma matrices are denoted by γμ, in which again μ = 0, 1, 2, 3, and there are a number of representations to select from. The matrix γ0 is not necessarily the 4 × 4 identity matrix. The expression
i\hbar \gamma^\mu \partial_\mu + mc \equiv i\hbar(\gamma^0 \partial_0 + \gamma^1 \partial_1 + \gamma^2 \partial_2 + \gamma^3 \partial_3) + mc \begin{pmatrix}1&0&0&0\\ 0&1&0&0 \\ 0&0&1&0 \\ 0&0&0&1 \end{pmatrix}
is a 4 × 4 matrix operator which acts on 4-component spinor fields.
Note that terms such as "mc" scalar multiply an identity matrix of the relevant dimension, the common sizes are 2 × 2 or 4 × 4, and are conventionally not written for simplicity.
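These algebraic properties are easy to check numerically. The following sketch (an illustration added here, not part of the original article; it assumes the Dirac representation of the gamma matrices) verifies the defining Clifford-algebra relation {γ^μ, γ^ν} = 2η^{μν} I on 4 × 4 matrices with numpy:

import numpy as np

# Pauli matrices sigma_0 (identity), sigma_1, sigma_2, sigma_3
s = [np.eye(2),
     np.array([[0, 1], [1, 0]]),
     np.array([[0, -1j], [1j, 0]]),
     np.array([[1, 0], [0, -1]])]
Z = np.zeros((2, 2))

# Gamma matrices in the Dirac representation:
# gamma^0 = diag(I, -I), gamma^i = [[0, sigma_i], [-sigma_i, 0]]
gamma = [np.block([[s[0], Z], [Z, -s[0]]])]
gamma += [np.block([[Z, s[i]], [-s[i], Z]]) for i in (1, 2, 3)]

eta = np.diag([1, -1, -1, -1])  # metric, signature (+, -, -, -)
for mu in range(4):
    for nu in range(4):
        anti = gamma[mu] @ gamma[nu] + gamma[nu] @ gamma[mu]
        assert np.allclose(anti, 2 * eta[mu, nu] * np.eye(4))
print("Clifford relation {gamma^mu, gamma^nu} = 2 eta^{mu nu} I verified")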
Particle spin quantum number s, with the name of the equation, the equation itself, and the typical particles it describes:

Spin 0: Klein–Gordon equation
(\hbar \partial_{\mu} + imc)(\hbar \partial^{\mu} -imc)\psi = 0
Describes a massless or massive spin-0 particle (such as Higgs bosons).

Spin 1/2: Weyl equation
\sigma^\mu\partial_\mu \psi=0
Describes massless spin-1/2 particles.

Spin 1/2: Dirac equation
\left( i \hbar \partial\!\!\!/ - m c \right) \psi = 0
Describes massive spin-1/2 particles (such as electrons).

Spin 1/2: Two-body Dirac equations
[(\gamma_1)_\mu (p_1-\tilde{A}_1)^\mu+m_1 + \tilde{S}_1]\Psi=0,
[(\gamma_2)_\mu (p_2-\tilde{A}_2)^\mu+m_2 + \tilde{S}_2]\Psi=0.

Spin 1/2: Majorana equation
i \hbar \partial\!\!\!/ \psi - m c \psi_c = 0
Describes massive Majorana particles.

Spin 1/2: Breit equation
i\hbar\frac{\partial \Psi}{\partial t} = \left(\sum_{i}\hat{H}_{D}(i) + \sum_{i>j}\frac{1}{r_{ij}} - \sum_{i>j}\hat{B}_{ij} \right) \Psi
Describes two massive spin-1/2 particles (such as electrons) interacting electromagnetically to first order in perturbation theory.

Spin 1: Maxwell equations (in QED, using the Lorenz gauge)
\partial_\mu\partial^\mu A^\nu = e \overline{\psi} \gamma^\nu \psi
Describes photons, massless spin-1 particles.

Spin 1: Proca equation
\partial_\mu(\partial^\mu A^\nu - \partial^\nu A^\mu)+\left(\frac{mc}{\hbar}\right)^2 A^\nu=0
Describes massive spin-1 particles (such as W and Z bosons).

Spin 3/2: Rarita–Schwinger equation
\epsilon^{\mu \nu \rho \sigma} \gamma^5 \gamma_\nu \partial_\rho \psi_\sigma + m\psi^\mu = 0
Describes massive spin-3/2 particles.

Spin s: Bargmann–Wigner equations
(-i\hbar \gamma^\mu \partial_\mu + mc)_{\alpha_1 \alpha_1'}\psi_{\alpha'_1 \alpha_2 \alpha_3 \cdots \alpha_{2s}} = 0
(-i\hbar \gamma^\mu \partial_\mu + mc)_{\alpha_2 \alpha_2'}\psi_{\alpha_1 \alpha'_2 \alpha_3 \cdots \alpha_{2s}} = 0
\qquad \vdots
(-i\hbar \gamma^\mu \partial_\mu + mc)_{\alpha_{2s} \alpha'_{2s}}\psi_{\alpha_1 \alpha_2 \alpha_3 \cdots \alpha'_{2s}} = 0
where ψ is a rank-2s 4-component spinor. Describes free particles of arbitrary spin (bosons and fermions).[8][10]
Gauge fields
The Duffin–Kemmer–Petiau equation is an alternative equation for spin-0 and spin-1 particles:
(i \hbar \beta^{a} \partial_a - m c) \psi = 0
Non-linear equations
There are equations which have solutions that do not satisfy the superposition principle.
Gauge fields

A standard non-linear example is the set of Yang–Mills equations for a non-abelian gauge field, which are non-linear because the gauge field couples to itself.
Spin 2
The relevant equations here are the Einstein field equations of general relativity; the solution is a metric tensor field, rather than a wavefunction.
References
1. ^ a b T. Jaroszewicz, P. S. Kurzepa (1992). "Geometry of spacetime propagation of spinning particles". Annals of Physics. doi:10.1016/0003-4916(92)90176-M.
2. ^ a b c d e S. Esposito (2011). "Searching for an equation: Dirac, Majorana and the others". arXiv:1110.6878.
3. ^ B. R. Martin, G. Shaw (2008). Particle Physics. Manchester Physics Series (3rd ed.). John Wiley & Sons. p. 3. ISBN 978-0-470-03294-7.
4. ^ R. Casalbuoni (2006). "Majorana and the Infinite Component Wave Equations". arXiv:hep-th/0610252.
5. ^ a b X. Bekaert, M. R. Traubenberg, M. Valenzuela (2009). "An infinite supermultiplet of massive higher-spin fields". arXiv:0904.2533.
6. ^ R. K. Loide, I. Ots, R. Saar (1997). "Bhabha relativistic wave equations". Bibcode:1997JPhA...30.4005L. doi:10.1088/0305-4470/30/11/027.
7. ^ Bargmann, V.; Wigner, E. P. (1948). "Group theoretical discussion of relativistic wave equations". Proc. Natl. Acad. Sci. U.S.A. 34 (5): 211–23. Bibcode:1948PNAS...34..211B. doi:10.1073/pnas.34.5.211.
8. ^ a b E. A. Jeffery (1978). "Component minimization of the Bargmann–Wigner wavefunction". Australian Journal of Physics 31: 137–149. Bibcode:1978AuJPh..31..137J.
9. ^ R. F. Guertin (1974). "Relativistic Hamiltonian equations for any spin". Annals of Physics. Bibcode:1974AnPhy..88..504G. doi:10.1016/0003-4916(74)90180-8.
10. ^ R. Clarkson, D. G. C. McKeon (2003). "Quantum Field Theory". pp. 61–69.
Further reading
• R. G. Lerner, G. L. Trigg (1991). Encyclopaedia of Physics (2nd ed.). VHC Publishers. ISBN 0-89573-752-3.
• C. B. Parker (1994). McGraw-Hill Encyclopaedia of Physics (2nd ed.). ISBN 0-07-051400-3.
• G. Woan (2010). The Cambridge Handbook of Physics Formulas. Cambridge University Press. ISBN 978-0-521-57507-2.
• D. McMahon (2006). Relativity Demystified. McGraw-Hill (USA). ISBN 0-07-145545-0.
• C. Misner, K. S. Thorne, J. A. Wheeler (1973). Gravitation. W. H. Freeman. ISBN 0-7167-0344-0.
• B. R. Martin, G. Shaw (2008). Particle Physics (Manchester series). John Wiley & Sons. ISBN 978-0-470-03294-7.
• P. Labelle (2010). Supersymmetry Demystified. McGraw-Hill (USA). ISBN 978-0-07-163641-4.
• B. H. Bransden, C. J. Joachain (1983). Physics of Atoms and Molecules. Longman. ISBN 0-582-44401-2.
• E. Abers (2004). Quantum Mechanics. Addison Wesley. ISBN 978-0-13-146100-0.
• D. McMahon (2008). Quantum Field Theory. McGraw-Hill (USA). ISBN 978-0-07-154382-8.
• M. Pillin (1993). "q-Deformed Relativistic Wave Equations". arXiv:hep-th/9310097. |
d0a0ef76bfbd7fdf |
Let $M$ be a compact Riemannian manifold and $\Delta$ be the Laplace-Beltrami operator. It is well-known that the solution operator to the heat equation $e^{t \Delta}$ is smoothing for $t>0$ and has a smooth integral kernel $k_t(x, y) \in C^\infty(M \times M)$. Furthermore, $k_t$ has an asymptotic expansion $$ k_t(x, y) \sim \underbrace{(4 \pi t)^{-n/2} \exp \left( -\frac{1}{4t} \mathrm{dist}(x, y)^2 \right)}_{:= e_t(x, y)} \sum_{j=0}^\infty t^j \Phi_j(x, y) $$ meaning that $$ \left| k_t(x, y) - e_t(x, y) \sum_{j=0}^N t^j \Phi_j(x, y) \right| \leq C t^{N+1}$$ uniformly in $x$ and $y$ in a neighborhood of the diagonal.
Now my question is about the Schrödinger equation. The solution operator $e^{it\Delta}$ is not smoothing anymore in this case (as it is unitary), so it cannot have a smooth integral kernel, can it? However, by formally substituting $t \rightarrow it$, one gets the formal asymptotic series $$ e_{it}(x, y) \sum_{j=0}^\infty (it)^j \Phi_j(x, y),$$ but apparently this does not have anything to do with $e^{it\Delta}$?
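(For reference: on $\mathbb{R}^n$ the substitution does produce the genuine propagator of $e^{it\Delta}$,
$$ K_{it}(x, y) = (4 \pi i t)^{-n/2} \exp \left( \frac{i\,|x - y|^2}{4t} \right), $$
which is exactly $e_{it}(x, y)$ with $\Phi_0 \equiv 1$ and all higher $\Phi_j = 0$, so the question is what survives of this on a compact manifold.)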
Honestly, I do not have a precise question, rather a catalogue of questions about this situation:
• Does this "formal Schrödinger kernel" make any sense?
• Why is the Schrödinger kernel smooth on $\mathbb{R}^n$ (it is given by $e_{it}(x, y)$), but not on a compact manifold (or am I completely mistaken here)?
• What can generally be said here to make this situation more clear?
Why do you think unitary would prevent smoothing? – Willie Wong Jan 11 at 14:25
Because a unitary operator maps $L^2$ to $L^2$ isometrically... – Kofi Jan 11 at 14:39
So? The Schrodinger operator on $\mathbb{R}^n$ is unitary and is smoothing. And for that matter, the heat operator maps $L^1$ to itself preserving norms. Yet it is also smoothing even on manifolds. – Willie Wong Jan 11 at 14:41
I don't see what you mean. The Schrödinger solution operator $U = e^{it\Delta}$ does not map $L^2(\mathbb{R}^n)$ to $C^\infty$, as taking $u := U^{-1}f$ for any non-smooth function $f$ gives a contradiction! On the other hand, the heat operator preserves the $L^1$-norm, but not the $L^2$ norm, and is not unitary! – Kofi Jan 14 at 20:15
I meant smoothing in the sense of "having a smooth integral kernel", since that is ultimately what you are interested in. The point is that the smoothing effect as seen from convolving against a smooth integral kernel does apply to a family of functions; it just happens that this family of functions is not the whole of $L^2$. – Willie Wong Jan 16 at 9:16
1 Answer
Unitarity has rather little to do with it, as the Schrodinger operator on $\mathbb{R}^n$ is unitary, and for any rapidly decreasing initial data (no regularity assumptions here! just decay ones) we have in fact that the solution is smooth for all positive times.
Compactness, however, of the manifold has quite a lot to do with it. This is because compactness implies that every geodesic is trapped, so we cannot have dispersion to infinity. More precisely:
Consider first the linear wave equation. We know that this equation has propagation of singularities along null geodesics. Roughly speaking we have that all frequencies are transported at the same speed and so if a collection of plane waves add to produce a singularity at time $t$, it will continue to do so at later times.
For the linear Schrodinger equation, the situation is different, the frequencies are not all traveling at the same speed. So if you have a high frequency wave packet and a low frequency one, some time later their spatial support will separate and won't constructively add to a singularity. This is why Schrodinger equation is smoothing for rapidly decaying initial data: if the data is decaying fast, all the action starts out near the origin, and so after some small time the wave packets, which were all originally located near the origin, now burst all over the place and cannot add up to a singularity anymore.
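(To make the heuristic quantitative, a standard one-line check: for the free equation $\partial_t u = i\Delta u$ the plane wave
$$ u(x,t) = e^{i(x\cdot\xi - |\xi|^2 t)} $$
is an exact solution, and a wave packet concentrated near frequency $\xi$ travels at the group velocity $\nabla_\xi |\xi|^2 = 2\xi$: packets of different frequencies separate at a rate proportional to their frequency difference.)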
However, if you now try to do Schrodinger's equation on a manifold for which the geodesic flow no longer guarantees that wave packets be transported by distance $\approx |\xi|t$, where $\xi$ is the frequency of the wave packet and $t$ is the elapsed time, then the above smoothing heuristic will no longer work. And in fact, this argument can be made rigorously in the case of non compact, asymptotically flat manifolds. See
Craig, Walter, On the microlocal regularity of the Schrödinger kernel. Partial differential equations and their applications (Toronto, ON, 1995), 71--90, CRM Proc. Lecture Notes, 12, Amer. Math. Soc., Providence, RI, 1997
In the case of a compact manifold, no geodesic can "escape" to infinity, so all wave packets will remain within finite distance of each other. By a covering argument, there will necessarily be points where an infinite number of the wave packets can accumulate and potentially cause the solution to be singular. This intuition has been carried out in special cases. For example, it is known that the Schrodinger kernel on the sphere $\mathbb{S}^d$ is a distribution with singular support in all of $\mathbb{R}\times \mathbb{S}^d$.
You can find more references in this MathOverflow post of Mazzeo.
Edit: Let me expand a bit further on my comment, which may give you an answer to your second question.
The main issue is the following: when we think of a "convolution kernel" as a solution to an evolutionary partial differential equation we generally expect the kernel $E_t$ to be in $(C^\infty_c(X))'$, where $X$ is the background manifold. That is to say, we expect $E_t$ to be a distribution for each time $t$. By convolution we can guarantee that for any $v$ a distribution with compact support (in notations, $v\in \mathcal{E}'(X)$) that $E_t*v$ is a distribution, and we have a distributional solution to the Cauchy problem. Now, if for $t > 0$ we have that the singular support of $E_t$ is the empty set, then by the properties of the convolution we have that $E_t*v \in C^\infty(X)$.
This is what I think of as "smoothing", and we see that it is immediately tied to the singular support of the convolution kernel.
Where compactness enters is the following trivial fact:
If $X$ is compact, then $C^\infty(X) = C^\infty_c(X)$, and the space of distributions and the space of distributions with compact support are the same.
We know that $L^2(X) \subset (C^\infty_c(X))'$, that is, $L^2$ functions are locally integrable and can be interpreted as distributions. In general, however, $L^2$ functions do not have compact support. But by the above trivial fact, we have that if $X$ is compact manifold, $L^2(X) \subset \mathcal{E}'(X)$. This implies that a smoothing kernel on a compact manifold will smooth any $L^2(X)$ function. This is what justifies your reasoning that on a compact manifold the Schrodinger kernel cannot be smooth.
On the other hand, this argument breaks whenever $X$ is non-compact. As $L^2(\mathbb{R}^n) \setminus \mathcal{E}'(\mathbb{R}^n)$ is non-empty, the originally defined convolution kernel cannot necessarily be applied to all $L^2$ functions (the convolution of two distributions of non-compact support may fail to be a distribution). For the case of the Schrodinger operators, as it turns out, we can take the convolution and still end up with a distribution, but the uniform estimates required for "smooth kernel implies smooth solution" is no longer true on the whole of $L^2(\mathbb{R}^n)$. Hence in general on a non-compact manifold $X$ one cannot conclude "unitary on $L^2(X) \implies $ lack of smoothing on $\mathcal{E}'(X)$".
|
a6daf3a9066d347d | Course Meeting Times
Lectures: 3 sessions / week, 1 hour / session
Recitations: 2 sessions / week, 1 hour / session
Krenos, John. Chemical Principles: The Quest for Insight/Student Study Guide and Solutions Manual. 4th ed. New York, NY: W.H. Freeman and Company, 2007. ISBN: 9781429200998. (Bundled set ISBN: 9781429212595.)
Lecture Notes
Grades will be based on a total of 750 points.
Three one-hour exams (3 × 100 points): 300 points
Three-hour final exam: 300 points
Problem sets: 100 points
Attendance and in-class "quizzes": 50 points
There will be 10 problem sets assigned during the semester. Assignments will be graded, and will be worth a total of 100 points of your final grade. The problem sets are not included in these course materials.
There will be three hour-long exams during the semester and a three-hour-long final exam. All exams are closed-book and closed-notes. Most required equations and a periodic table will be provided.
Biology Topics
In an effort to illuminate connections between chemistry and biology and spark students' excitement for chemistry, we incorporate frequent biology-related examples into the lectures. These in-class examples range from two to ten minutes, designed to succinctly introduce biological connections without sacrificing any chemistry content in the curriculum.
Significant Figures
Rules for scientific notation and significant figures are available in the back of the textbook in Appendix 1, pages A5-A6. You are also responsible for knowing the following SI prefixes: n (nano, 10^-9), µ (micro, 10^-6), m (milli, 10^-3), c (centi, 10^-2), and k (kilo, 10^3).
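As a quick illustration (an example added here, not from the original course page): a wavelength of 450 nm written in base SI units is 450 × 10^-9 m = 4.50 × 10^-7 m.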
Clicker Questions
We will use classroom response devices during lectures to take attendance, enable feedback, and facilitate occasional in-class quizzes. We have outlined the following points to help clarify the class policies regarding clicker use.
Why are we using clickers?
1. Clickers give us additional feedback on whether the class as a whole understands a given concept or when our explanations need to be expanded or clarified. This enables us to gauge the understanding of the entire class and adjust our lessons accordingly.
2. Clickers also provide you as a student feedback on how well you understand the material and how fast you are able to solve problems. For example, if you are able to solve the homework problems but run out of time on in-class clicker questions, it is a good indication that you will be pinched for time on the exam and may need to work through more practice problems to increase your speed.
3. We feel it is appropriate to reward the many students that consistently come to class and participate. In addition, because we take attendance we feel more comfortable posting lecture notes online.
Answering in-class clicker questions
1. Apart from announced in-class quiz questions, you will not be graded on whether you answer clicker questions correctly.
2. For routine clicker questions, you are encouraged to attempt the question on your own, but you are certainly allowed to quietly discuss the problem with your neighbor. For announced quiz questions, any talking or sharing of answers is considered cheating.
The calendar below provides information on the course's lecture (L) and exam (E) sessions.
L1 The importance of chemical principles
L2 Discovery of electron and nucleus, need for quantum mechanics
L3 Wave-particle duality of light
L4 Wave-particle duality of matter, Schrödinger equation
L5 Hydrogen atom energy levels Problem set 1 due
L6 Hydrogen atom wavefunctions (orbitals)
L7 p-orbitals
L8 Multielectron atoms and electron configurations Problem set 2 due
L9 Periodic trends
L10 Periodic trends continued; Covalent bonds Problem set 3 due
L11 Lewis structures
E1 Exam 1 covering lectures 1-9
L12 Exceptions to Lewis structure rules; Ionic bonds
L13 Polar covalent bonds; VSEPR theory
L14 Molecular orbital theory
L15 Valence bond theory and hybridization Problem set 4 due
L16 Determining hybridization in complex molecules; Thermochemistry and bond energies/bond enthalpies
L17 Entropy and disorder Problem set 5 due
L18 Free energy and control of spontaneity
E2 Exam 2 covering lectures 10-16
L19 Chemical equilibrium
L20 Le Chatelier's principle and applications to blood-oxygen levels
L22 Chemical and biological buffers Problem set 6 due
L23 Acid-base titrations
L24 Balancing oxidation/reduction equations
L25 Electrochemical cells Problem set 7 due
L26 Chemical and biological oxidation/reduction reactions
L27 Transition metals and the treatment of lead poisoning Problem set 8 due
L28 Crystal field theory
E3 Exam 3 covering lectures 17-26
L29 Metals in biology
L30 Magnetism and spectrochemical theory
L31 Rate laws Problem set 9 due
L32 Nuclear chemistry and elementary reactions
L33 Reaction mechanism
L34 Temperature and kinetics Problem set 10 due
L35 Enzyme catalysis
L36 Biochemistry
E4 Final exam covering lectures 1-36 |
430943882da6b729 | Causal Determinism
First published Thu Jan 23, 2003; substantive revision Thu Jan 21, 2010
1. Introduction
In most of what follows, I will speak simply of determinism, rather than of causal determinism. This follows recent philosophical practice of sharply distinguishing views and theories of what causation is from any conclusions about the success or failure of determinism (cf. Earman, 1986; an exception is Mellor 1994). For the most part this disengagement of the two concepts is appropriate. But as we will see later, the notion of cause/effect is not so easily disengaged from much of what matters to us about determinism.
Traditionally determinism has been given various, usually imprecise definitions. This is only problematic if one is investigating determinism in a specific, well-defined theoretical context; but it is important to avoid certain major errors of definition. In order to get started we can begin with a loose and (nearly) all-encompassing definition as follows:
Determinism: The world is governed by (or is under the sway of) determinism if and only if, given a specified way things are at a time t, the way things go thereafter is fixed as a matter of natural law.

The italicized phrases are elements that require further explanation and investigation, in order for us to gain a clear understanding of the concept of determinism.
The roots of the notion of determinism surely lie in a very common philosophical idea: the idea that everything can, in principle, be explained, or that everything that is, has a sufficient reason for being and being as it is, and not otherwise. In other words, the roots of determinism lie in what Leibniz named the Principle of Sufficient Reason. But since precise physical theories began to be formulated with apparently deterministic character, the notion has become separable from these roots. Philosophers of science are frequently interested in the determinism or indeterminism of various theories, without necessarily starting from a view about Leibniz' Principle.
Since the first clear articulations of the concept, there has been a tendency among philosophers to believe in the truth of some sort of determinist doctrine. There has also been a tendency, however, to confuse determinism proper with two related notions: predictability and fate.
Fatalism is easily disentangled from determinism, to the extent that one can disentangle mystical forces and gods' wills and foreknowledge (about specific matters) from the notion of natural/causal law. Not every metaphysical picture makes this disentanglement possible, of course. As a general matter, we can imagine that certain things are fated to happen, without this being the result of deterministic natural laws alone; and we can imagine the world being governed by deterministic laws, without anything at all being fated to occur (perhaps because there are no gods, nor mystical forces deserving the titles fate or destiny, and in particular no intentional determination of the “initial conditions” of the world). In a looser sense, however, it is true that under the assumption of determinism, one might say that given the way things have gone in the past, all future events that will in fact happen are already destined to occur.
Prediction and determinism are also easy to disentangle, barring certain strong theological commitments. As the following famous expression of determinism by Laplace shows, however, the two are also easy to commingle:
We may regard the present state of the universe as the effect of its past and the cause of its future. An intellect which at a certain moment would know all forces that set nature in motion, and all positions of all items of which nature is composed, if this intellect were also vast enough to submit these data to analysis, it would embrace in a single formula the movements of the greatest bodies of the universe and those of the tiniest atom; for such an intellect nothing would be uncertain and the future just like the past would be present before its eyes.

In the twentieth century, Karl Popper defined determinism in terms of predictability also.
Laplace probably had God in mind as the powerful intelligence to whose gaze the whole future is open. If not, he should have: 19th and 20th century mathematical studies have shown convincingly that neither a finite, nor an infinite but embedded-in-the-world intelligence can have the computing power necessary to predict the actual future, in any world remotely like ours. “Predictability” is therefore a façon de parler that at best makes vivid what is at stake in determinism; in rigorous discussions it should be eschewed. The world could be highly predictable, in some senses, and yet not deterministic; and it could be deterministic yet highly unpredictable, as many studies of chaos (sensitive dependence on initial conditions) show.
Predictability does however make vivid what is at stake in determinism: our fears about our own status as free agents in the world. In Laplace's story, a sufficiently bright demon who knew how things stood in the world 100 years before my birth could predict every action, every emotion, every belief in the course of my life. Were she then to watch me live through it, she might smile condescendingly, as one who watches a marionette dance to the tugs of strings that it knows nothing about. We can't stand the thought that we are (in some sense) marionettes. Nor does it matter whether any demon (or even God) can, or cares to, actually predict what we will do: the existence of the strings of physical necessity, linked to far-past states of the world and determining our current every move, is what alarms us. Whether such alarm is actually warranted is a question well outside the scope of this article (see the entries on free will and incompatibilist theories of freedom). But a clear understanding of what determinism is, and how we might be able to decide its truth or falsity, is surely a useful starting point for any attempt to grapple with this issue. We return to the issue of freedom in Determinism and Human Action below.
2. Conceptual Issues in Determinism
Recall that we loosely defined causal determinism as follows, with terms in need of clarification italicized:

Determinism: The world is governed by (or is under the sway of) determinism if and only if, given a specified way things are at a time t, the way things go thereafter is fixed as a matter of natural law.
2.1 The World
Why should we start so globally, speaking of the world, with all its myriad events, as deterministic? One might have thought that a focus on individual events is more appropriate: an event E is causally determined if and only if there exists a set of prior events {A, B, C …} that constitute a (jointly) sufficient cause of E. Then if all—or even just most—events E that are our human actions are causally determined, the problem that matters to us, namely the challenge to free will, is in force. Nothing so global as states of the whole world need be invoked, nor even a complete determinism that claims all events to be causally determined.
For a variety of reasons this approach is fraught with problems, and the reasons explain why philosophers of science mostly prefer to drop the word “causal” from their discussions of determinism. Generally, as John Earman quipped (1986), to go this route is to “… seek to explain a vague concept—determinism—in terms of a truly obscure one—causation.” More specifically, neither philosophers' nor laymen's conceptions of events have any correlate in any modern physical theory.[1] The same goes for the notions of cause and sufficient cause. A further problem is posed by the fact that, as is now widely recognized, a set of events {A, B, C …} can only be genuinely sufficient to produce an effect-event if the set includes an open-ended ceteris paribus clause excluding the presence of potential disruptors that could intervene to prevent E. For example, the start of a football game on TV on a normal Saturday afternoon may be sufficient ceteris paribus to launch Ted toward the fridge to grab a beer; but not if a million-ton asteroid is approaching his house at .75c from a few thousand miles away, nor if the phone is about to ring with news of a tragic nature, …, and so on. Bertrand Russell famously argued against the notion of cause along these lines (and others) in 1912, and the situation has not changed. By trying to define causal determination in terms of a set of prior sufficient conditions, we inevitably fall into the mess of an open-ended list of negative conditions required to achieve the desired sufficiency.
Moreover, thinking about how such determination relates to free action, a further problem arises. If the ceteris paribus clause is open-ended, who is to say that it should not include the negation of a potential disruptor corresponding to my freely deciding not to go get the beer? If it does, then we are left saying “When A, B, C, … Ted will then go to the fridge for a beer, unless D or E or F or … or Ted decides not to do so.” The marionette strings of a “sufficient cause” begin to look rather tenuous.
They are also too short. For the typical set of prior events that can (intuitively, plausibly) be thought to be a sufficient cause of a human action may be so close in time and space to the agent, as to not look like a threat to freedom so much as like enabling conditions. If Ted is propelled to the fridge by {seeing the game's on; desiring to repeat the satisfactory experience of other Saturdays; feeling a bit thirsty; etc}, such things look more like good reasons to have decided to get a beer, not like external physical events far beyond Ted's control. Compare this with the claim that {state of the world in 1900; laws of nature} entail Ted's going to get the beer: the difference is dramatic. So we have a number of good reasons for sticking to the formulations of determinism that arise most naturally out of physics. And this means that we are not looking at how a specific event of ordinary talk is determined by previous events; we are looking at how everything that happens is determined by what has gone before. The state of the world in 1900 only entails that Ted grabs a beer from the fridge by way of entailing the entire physical state of affairs at the later time.
2.2 The way things are at a time t
The typical explication of determinism fastens on the state of the (whole) world at a particular time (or instant), for a variety of reasons. We will briefly explain some of them. Why take the state of the whole world, rather than some (perhaps very large) region, as our starting point? One might, intuitively, think that it would be enough to give the complete state of things on Earth, say, or perhaps in the whole solar system, at t, to fix what happens thereafter (for a time at least). But notice that all sorts of influences from outside the solar system come in at the speed of light, and they may have important effects. Suppose Mary looks up at the sky on a clear night, and a particularly bright blue star catches her eye; she thinks “What a lovely star; I think I'll stay outside a bit longer and enjoy the view.” The state of the solar system one month ago did not fix that that blue light from Sirius would arrive and strike Mary's retina; it arrived into the solar system only a day ago, let's say. So evidently, for Mary's actions (and hence, all physical events generally) to be fixed by the state of things a month ago, that state will have to be fixed over a much larger spatial region than just the solar system. (If no physical influences can go faster than light, then the state of things must be given from a spherical volume of space 1 light-month in radius.)
But in making vivid the “threat” of determinism, we often want to fasten on the idea of the entire future of the world as being determined. No matter what the “speed limit” on physical influences is, if we want the entire future of the world to be determined, then we will have to fix the state of things over all of space, so as not to miss out something that could later come in “from outside” to spoil things. In the time of Laplace, of course, there was no known speed limit to the propagation of physical things such as light-rays. In principle light could travel at any arbitrarily high speed, and some thinkers did suppose that it was transmitted “instantaneously.” The same went for the force of gravity. In such a world, evidently, one has to fix the state of things over the whole of the world at a time t, in order for events to be strictly determined, by the laws of nature, for any amount of time thereafter.
In all this, we have been presupposing the common-sense Newtonian framework of space and time, in which the world-at-a-time is an objective and meaningful notion. Below when we discuss determinism in relativistic theories we will revisit this assumption.
2.3 Thereafter
For a wide class of physical theories (i.e., proposed sets of laws of nature), if they can be viewed as deterministic at all, they can be viewed as bi-directionally deterministic. That is, a specification of the state of the world at a time t, along with the laws, determines not only how things go after t, but also how things go before t. Philosophers, while not exactly unaware of this symmetry, tend to ignore it when thinking of the bearing of determinism on the free will issue. The reason for this is that we tend to think of the past (and hence, states of the world in the past) as done, over, fixed and beyond our control. Forward-looking determinism then entails that these past states—beyond our control, perhaps occurring long before humans even existed—determine everything we do in our lives. It then seems a mere curious fact that it is equally true that the state of the world now determines everything that happened in the past. We have an ingrained habit of taking the direction of both causation and explanation as being past → present, even when discussing physical theories free of any such asymmetry. We will return to this point shortly.
Another point to notice here is that the notion of things being determined thereafter is usually taken in an unlimited sense—i.e., determination of all future events, no matter how remote in time. But conceptually speaking, the world could be only imperfectly deterministic: things could be determined only, say, for a thousand years or so from any given starting state of the world. For example, suppose that near-perfect determinism were regularly (but infrequently) interrupted by spontaneous particle creation events, which occur on average only once every thousand years in a thousand-light-year-radius volume of space. This unrealistic example shows how determinism could be strictly false, and yet the world be deterministic enough for our concerns about free action to be unchanged.
2.4 Laws of nature
In the loose statement of determinism we are working from, metaphors such as “govern” and “under the sway of” are used to indicate the strong force being attributed to the laws of nature. Part of understanding determinism—and especially, whether and why it is metaphysically important—is getting clear about the status of the presumed laws of nature.
In the physical sciences, the assumption that there are fundamental, exceptionless laws of nature, and that they have some strong sort of modal force, usually goes unquestioned. Indeed, talk of laws “governing” and so on is so commonplace that it takes an effort of will to see it as metaphorical. We can characterize the usual assumptions about laws in this way: the laws of nature are assumed to be pushy explainers. They make things happen in certain ways, and by having this power, their existence lets us explain why things happen in certain ways. (For a recent defense of this perspective on laws, see Maudlin (2007)). Laws, we might say, are implicitly thought of as the cause of everything that happens. If the laws governing our world are deterministic, then in principle everything that happens can be explained as following from states of the world at earlier times. (Again, we note that even though the entailment typically works in the future → past direction also, we have trouble thinking of this as a legitimate explanatory entailment. In this respect also, we see that laws of nature are being implicitly treated as the causes of what happens: causation, intuitively, can only go past → future.)
It is a remarkable fact that philosophers tend to acknowledge the apparent threat determinism poses to free will, even when they explicitly reject the view that laws are pushy explainers. Earman (1986), for example, explicitly adopts a theory of laws of nature that takes them to be simply the best system of regularities that systematizes all the events in universal history. This is the Best Systems Analysis (BSA), with roots in the work of Hume, Mill and Ramsey, and most recently refined and defended by David Lewis (1973, 1994) and by Earman (1984, 1986). (cf. entry on laws of nature). Yet he ends his comprehensive Primer on Determinism with a discussion of the free will problem, taking it as a still-important and unresolved issue. Prima facie at least, this is quite puzzling, for the BSA is founded on the idea that the laws of nature are ontologically derivative, not primary; it is the events of universal history, as brute facts, that make the laws be what they are, and not vice-versa. Taking this idea seriously, the actions of every human agent in history are simply a part of the universe-wide pattern of events that determines what the laws are for this world. It is then hard to see how the most elegant summary of this pattern, the BSA laws, can be thought of as determiners of human actions. The determination or constraint relations, it would seem, can go one way or the other, not both!
On second thought, however, it is not so surprising that broadly Humean philosophers such as Ayer, Earman, Lewis and others still see a potential problem for freedom posed by determinism. For even if human actions are part of what makes the laws be what they are, this does not mean that we automatically have freedom of the kind we think we have, particularly freedom to have done otherwise given certain past states of affairs. It is one thing to say that everything occurring in and around my body, and everything everywhere else, conforms to Maxwell's equations and thus the Maxwell equations are genuine exceptionless regularities, and that because they in addition are simple and strong, they turn out to be laws. It is quite another thing to add: thus, I might have chosen to do otherwise at certain points in my life, and if I had, then Maxwell's equations would not have been laws. One might try to defend this claim—unpalatable as it seems intuitively, to ascribe ourselves law-breaking power—but it does not follow directly from a Humean approach to laws of nature. Instead, on such views that deny laws most of their pushiness and explanatory force, questions about determinism and human freedom simply need to be approached afresh.
A second important genre of theories of laws of nature holds that the laws are in some sense necessary. For any such approach, laws are just the sort of pushy explainers that are assumed in the traditional language of physical scientists and free will theorists. But a third and growing class of philosophers holds that (universal, exceptionless, true) laws of nature simply do not exist. Among those who hold this are influential philosophers such as Nancy Cartwright, Bas van Fraassen, and John Dupré. For these philosophers, there is a simple consequence: determinism is a false doctrine. As with the Humeans, this does not mean that concerns about human free action are automatically resolved; instead, they must be addressed afresh in the light of whatever account of physical nature without laws is put forward. See Dupré (2001) for one such discussion.
2.5 Fixed
We can now put our—still vague—pieces together. Determinism requires a world that (a) has a well-defined state or description, at any given time, and (b) laws of nature that are true at all places and times. If we have all these, then if (a) and (b) together logically entail the state of the world at all other times (or, at least, all times later than that given in (a)), the world is deterministic. Logical entailment, in a sense broad enough to encompass mathematical consequence, is the modality behind the determination in “determinism.”
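Schematically (a compact restatement added for clarity):

$$\text{State}(t) \;\wedge\; \text{Laws} \;\models\; \text{State}(t') \quad \text{for all } t' \text{ (or at least all } t' \text{ later than } t),$$

where $\models$ is logical entailment in the broad sense just described.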
3. The Epistemology of Determinism
How could we ever decide whether our world is deterministic or not? Given that some philosophers and some physicists have held firm views—with many prominent examples on each side—one would think that it should be at least a clearly decidable question. Unfortunately, even this much is not clear, and the epistemology of determinism turns out to be a thorny and multi-faceted issue.
3.1 Laws again
As we saw above, for determinism to be true there have to be some laws of nature. Most philosophers and scientists since the 17th century have indeed thought that there are. But in the face of more recent skepticism, how can it be proven that there are? And if this hurdle can be overcome, don't we have to know, with certainty, precisely what the laws of our world are, in order to tackle the question of determinism's truth or falsity?
The first hurdle can perhaps be overcome by a combination of metaphysical argument and appeal to knowledge we already have of the physical world. Philosophers are currently pursuing this issue actively, in large part due to the efforts of the anti-laws minority. The debate has been most recently framed by Cartwright in The Dappled World (Cartwright 1999) in terms psychologically advantageous to her anti-laws cause. Those who believe in the existence of traditional, universal laws of nature are fundamentalists; those who disbelieve are pluralists. This terminology seems to be becoming standard (see Belot 2001), so the first task in the epistemology of determinism is for fundamentalists to establish the reality of laws of nature (see Hoefer 2002b).
Even if the first hurdle can be overcome, the second, namely establishing precisely what the actual laws are, may seem daunting indeed. In a sense, what we are asking for is precisely what 19th and 20th century physicists sometimes set as their goal: the Final Theory of Everything. But perhaps, as Newton said of establishing the solar system's absolute motion, “the thing is not altogether desperate.” Many physicists in the past 60 years or so have been convinced of determinism's falsity, because they were convinced that (a) whatever the Final Theory is, it will be some recognizable variant of the family of quantum mechanical theories; and (b) all quantum mechanical theories are non-deterministic. Both (a) and (b) are highly debatable, but the point is that one can see how arguments in favor of these positions might be mounted. The same was true in the 19th century, when theorists might have argued that (a) whatever the Final Theory is, it will involve only continuous fluids and solids governed by partial differential equations; and (b) all such theories are deterministic. (Here, (b) is almost certainly false; see Earman (1986), ch. XI). Even if we now are not, we may in future be in a position to mount a credible argument for or against determinism on the grounds of features we think we know the Final Theory must have.
3.2 Experience
Determinism could perhaps also receive direct support—confirmation in the sense of probability-raising, not proof—from experience and experiment. For theories (i.e., potential laws of nature) of the sort we are used to in physics, it is typically the case that if they are deterministic, then to the extent that one can perfectly isolate a system and repeatedly impose identical starting conditions, the subsequent behavior of the systems should also be identical. And in broad terms, this is the case in many domains we are familiar with. Your computer starts up every time you turn it on, and (if you have not changed any files, have no anti-virus software, re-set the date to the same time before shutting down, and so on …) always in exactly the same way, with the same speed and resulting state (until the hard drive fails). The light comes on exactly 32 µsec after the switch closes (until the day the bulb fails). These cases of repeated, reliable behavior obviously require some serious ceteris paribus clauses, are never perfectly identical, and always subject to catastrophic failure at some point. But we tend to think that for the small deviations, probably there are explanations for them in terms of different starting conditions or failed isolation, and for the catastrophic failures, definitely there are explanations in terms of different conditions.
There have even been studies of paradigmatically “chancy” phenomena such as coin-flipping, which show that if starting conditions can be precisely controlled and outside interferences excluded, identical behavior results (see Diaconis, Holmes & Montgomery 2004). Most of these bits of evidence for determinism no longer seem to cut much ice, however, because of faith in quantum mechanics and its indeterminism. Indeterminist physicists and philosophers are ready to acknowledge that macroscopic repeatability is usually obtainable, where phenomena are so large-scale that quantum stochasticity gets washed out. But they would maintain that this repeatability is not to be found in experiments at the microscopic level, and also that at least some failures of repeatability (in your hard drive, or coin-flipping experiments) are genuinely due to quantum indeterminism, not just failures to isolate properly or establish identical initial conditions.
If quantum theories were unquestionably indeterministic, and deterministic theories guaranteed repeatability of a strong form, there could conceivably be further experimental input on the question of determinism's truth or falsity. Unfortunately, the existence of Bohmian quantum theories casts strong doubt on the former point, while chaos theory casts strong doubt on the latter. More will be said about each of these complications below.
3.3 Determinism and Chaos
If the world were governed by strictly deterministic laws, might it still look as though indeterminism reigns? This is one of the difficult questions that chaos theory raises for the epistemology of determinism.
A deterministic chaotic system has, roughly speaking, two salient features: (i) the evolution of the system over a long time period effectively mimics a random or stochastic process—it lacks predictability or computability in some appropriate sense; (ii) two systems with nearly identical initial states will have radically divergent future developments, within a finite (and typically, short) timespan. We will use “randomness” to denote the first feature, and “sensitive dependence on initial conditions” (SDIC) for the latter. Definitions of chaos may focus on either or both of these properties; Batterman (1993) argues that only (ii) provides an appropriate basis for defining chaotic systems.
A simple and very important example of a chaotic system in both randomness and SDIC terms is the Newtonian dynamics of a pool table with a convex obstacle (or obstacles) (Sinai 1970 and others). See Figure 1:
Figure 1: Billiard table with convex obstacle
The usual idealizing assumptions are made: no friction, perfectly elastic collisions, no outside influences. The ball's trajectory is determined by its initial position and direction of motion. If we imagine a slightly different initial direction, the trajectory will at first be only slightly different. And collisions with the straight walls will not tend to increase very rapidly the difference between trajectories. But collisions with the convex object will have the effect of amplifying the differences. After several collisions with the convex body or bodies, trajectories that started out very close to one another will have become wildly different—SDIC.
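The growth of small differences is easy to see numerically. The sketch below is illustrative only: rather than simulating the billiard itself, it uses the logistic map, a standard chaotic toy system, to track two trajectories whose initial conditions differ by one part in a million.

# Sensitive dependence on initial conditions (SDIC), illustrated with
# the logistic map x -> 4x(1-x), which is chaotic on [0, 1].
x, y = 0.400000, 0.400001   # two initial conditions, 1e-6 apart

for n in range(1, 26):
    x = 4 * x * (1 - x)
    y = 4 * y * (1 - y)
    if n % 5 == 0:
        print(f"step {n:2d}: |x - y| = {abs(x - y):.2e}")

# The separation grows roughly like 1e-6 * 2**n (the Lyapunov exponent
# of this map is ln 2) until it saturates at order 1: the two
# trajectories have become wildly different.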
In the example of the billiard table, we know that we are starting out with a Newtonian deterministic system—that is how the idealized example is defined. But chaotic dynamical systems come in a great variety of types: discrete and continuous, 2-dimensional, 3-dimensional and higher, particle-based and fluid-flow-based, and so on. Mathematically, we may suppose all of these systems share SDIC. But generally they will also display properties such as unpredictability, non-computability, Kolmogorov-random behavior, and so on—at least when looked at in the right way, or at the right level of detail. This leads to the following epistemic difficulty: if, in nature, we find a type of system that displays some or all of these latter properties, how can we decide which of the following two hypotheses is true?
1. The system is governed by genuinely stochastic, indeterministic laws (or by no laws at all), i.e., its apparent randomness is in fact real randomness.
2. The system is governed by underlying deterministic laws, but is chaotic.
In other words, once one appreciates the varieties of chaotic dynamical systems that exist, mathematically speaking, it starts to look difficult—maybe impossible—for us to ever decide whether apparently random behavior in nature arises from genuine stochasticity, or rather from deterministic chaos. Patrick Suppes (1993, 1996) argues, on the basis of theorems proven by Ornstein (1974 and later) that “There are processes which can equally well be analyzed as deterministic systems of classical mechanics or as indeterministic semi-Markov processes, no matter how many observations are made.” And he concludes that “Deterministic metaphysicians can comfortably hold to their view knowing they cannot be empirically refuted, but so can indeterministic ones as well.” (Suppes (1993), p. 254)
There is certainly an interesting problem area here for the epistemology of determinism, but it must be handled with care. It may well be true that there are some deterministic dynamical systems that, when viewed properly, display behavior indistinguishable from that of a genuinely stochastic process. For example, using the billiard table above, if one divides its surface into quadrants and looks at which quadrant the ball is in at 30-second intervals, the resulting sequence is no doubt highly random. But this does not mean that the same system, when viewed in a different way (perhaps at a higher degree of precision), cannot cease to look random and instead betray its deterministic nature. If we partition our billiard table into squares 2 centimeters a side and look at which square the ball is in at .1 second intervals, the resulting sequence will be far from random. And finally, of course, if we simply look at the billiard table with our eyes, and see it as a billiard table, there is no obvious way at all to maintain that it may be a truly random process rather than a deterministic dynamical system. (See Winnie (1996) for a nice technical and philosophical discussion of these issues. Winnie explicates Ornstein's and others' results in some detail, and disputes Suppes' philosophical conclusions.)
The dynamical systems usually studied under the label of “chaos” are usually either purely abstract, mathematical systems, or classical Newtonian systems. It is natural to wonder whether chaotic behavior carries over into the realm of systems governed by quantum mechanics as well. Interestingly, it is much harder to find natural correlates of classical chaotic behavior in true quantum systems. (See Gutzwiller (1990)). Some, at least, of the interpretive difficulties of quantum mechanics would have to be resolved before a meaningful assessment of chaos in quantum mechanics could be achieved. For example, SDIC is hard to find in the Schrödinger evolution of a wavefunction for a system with finite degrees of freedom; but in Bohmian quantum mechanics it is handled quite easily on the basis of particle trajectories. (See Dürr, Goldstein and Zanghì (1992)).
The popularization of chaos theory in the past decade and a half has perhaps made it seem self-evident that nature is full of genuinely chaotic systems. In fact, it is far from self-evident that such systems exist, other than in an approximate sense. Nevertheless, the mathematical exploration of chaos in dynamical systems helps us to understand some of the pitfalls that may attend our efforts to know whether our world is genuinely deterministic or not.
3.4 Metaphysical arguments
Let us suppose that we shall never have the Final Theory of Everything before us—at least in our lifetime—and that we also remain unclear (on physical/experimental grounds) as to whether that Final Theory will be of a type that can or cannot be deterministic. Is there nothing left that could sway our belief toward or against determinism? There is, of course: metaphysical argument. Metaphysical arguments on this issue are not currently very popular. But philosophical fashions change at least twice a century, and grand systemic metaphysics of the Leibnizian sort might one day come back into favor. Conversely, the anti-systemic, anti-fundamentalist metaphysics propounded by Cartwright (1999) might also come to predominate. As likely as not, for the foreseeable future metaphysical argument may be just as good a basis on which to discuss determinism's prospects as any arguments from mathematics or physics.
4. The Status of Determinism in Physical Theories
John Earman's Primer on Determinism (1986) remains the richest storehouse of information on the truth or falsity of determinism in various physical theories, from classical mechanics to quantum mechanics and general relativity. (See also his recent update on the subject, “Aspects of Determinism in Modern Physics” (2007)). Here I will give only a brief discussion of some key issues, referring the reader to Earman (1986) and other resources for more detail. Figuring out whether well-established theories are deterministic or not (or to what extent, if they fall only a bit short) does not do much to help us know whether our world is really governed by deterministic laws; all our current best theories, including General Relativity and the Standard Model of particle physics, are too flawed and ill-understood to be mistaken for anything close to a Final Theory. Nevertheless, as Earman (1986) stressed, the exploration is very valuable because of the way it enriches our understanding of the richness and complexity of determinism.
4.1 Classical mechanics
Despite the common belief that classical mechanics (the theory that inspired Laplace in his articulation of determinism) is perfectly deterministic, in fact the theory is rife with possibilities for determinism to break down. One class of problems arises due to the absence of an upper bound on the velocities of moving objects. Below we see the trajectory of an object that is accelerated unboundedly, its velocity becoming in effect infinite in a finite time. See Figure 2:
Figure 2: An object accelerates so as to reach spatial infinity in a finite time
By the time t = t*, the object has literally disappeared from the world—its world-line never reaches the t = t* surface. (Never mind how the object gets accelerated in this way; there are mechanisms that are perfectly consistent with classical mechanics that can do the job. In fact, Xia (1992) showed that such acceleration can be accomplished by gravitational forces from only 5 finite objects, without collisions. No mechanism is shown in these diagrams.) This “escape to infinity,” while disturbing, does not yet look like a violation of determinism. But now recall that classical mechanics is time-symmetric: any model has a time-inverse, which is also a consistent model of the theory. The time-inverse of our escaping body is playfully called a “space invader.”
Figure 3: A ‘space invader’ comes in from spatial infinity
Clearly, a world with a space invader does fail to be deterministic. Before t = 0, there was nothing in the state of things to enable the prediction of the appearance of the invader at t = 0+.[2] One might think that the infinity of space is to blame for this strange behavior, but this is not obviously correct. In finite, “rolled-up” or cylindrical versions of Newtonian space-time, space-invader trajectories can be constructed, though whether a “reasonable” mechanism to power them exists is not clear.[3]
A second class of determinism-breaking models can be constructed on the basis of collision phenomena. The first problem is that of multiple-particle collisions for which Newtonian particle mechanics simply does not have a prescription for what happens. (Consider three identical point-particles approaching each other at 120 degree angles and colliding simultaneously. That they bounce back along their approach trajectories is possible; but it is equally possible for them to bounce in other directions (again with 120 degree angles between their paths), so long as momentum conservation is respected.)
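A quick numerical check of this underdetermination (my own sketch, not from the literature): because the incoming momenta sum to zero, every outgoing triple with equal speeds and mutual 120-degree angles conserves momentum and kinetic energy, whatever its overall orientation.

```python
import math

# Three identical unit-speed particles whose velocities are separated by
# 120 degrees: total momentum is zero for *every* overall orientation theta,
# so conservation laws alone cannot single out one post-collision outcome.
def momenta(theta):
    return [(math.cos(theta + k * 2.0 * math.pi / 3.0),
             math.sin(theta + k * 2.0 * math.pi / 3.0)) for k in range(3)]

for theta in (0.0, 0.7, 2.1):  # three arbitrary outgoing orientations
    px = sum(p[0] for p in momenta(theta))
    py = sum(p[1] for p in momenta(theta))
    print(f"theta={theta}: total momentum = ({px:.2e}, {py:.2e})")  # ~ (0, 0)
```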
Moreover, there is a burgeoning literature of physical or quasi-physical systems, usually set in the context of classical physics, that carry out supertasks (see Earman and Norton (1998) and the entry on supertasks for a review). Frequently, the puzzle presented is to decide, on the basis of the well-defined behavior before time t = a, what state the system will be in at t = a itself. A failure of CM to dictate a well-defined result can then be seen as a failure of determinism.
In supertasks, one frequently encounters infinite numbers of particles, infinite (or unbounded) mass densities, and other dubious infinitary phenomena. Coupled with some of the other breakdowns of determinism in CM, one begins to get a sense that most, if not all, breakdowns of determinism rely on some combination of the following set of (physically) dubious mathematical notions: {infinite space; unbounded velocity; continuity; point-particles; singular fields}. The trouble is, it is difficult to imagine any recognizable physics (much less CM) that eschews everything in the set.
Finally, an elegant example of apparent violation of determinism in classical physics has been created by John Norton (2003). As illustrated in Figure 4, imagine a ball sitting at the apex of a frictionless dome whose equation is specified as a function of radial distance from the apex point. This rest-state is our initial condition for the system; what should its future behavior be? Clearly one solution is for the ball to remain at rest at the apex indefinitely.
Figure 4: A ball may spontaneously start sliding down this dome, with no violation of Newton's laws.
(Reproduced courtesy of John D. Norton and Philosopher's Imprint)
But curiously, this is not the only solution under standard Newtonian laws. The ball may also start into motion sliding down the dome—at any moment in time, and in any radial direction. This example displays “uncaused motion” without, Norton argues, any violation of Newton's laws, including the First Law. And it does not, unlike some supertask examples, require an infinity of particles. Still, many philosophers are uncomfortable with the moral Norton draws from his dome example, and point out reasons for questioning the dome's status as a Newtonian system (see e.g. Malament (2008)).
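For readers who want the mathematics, the dome can be written down explicitly (this follows Norton's presentation, with constants normalized for convenience). Newton's second law along the surface reduces to

\[
\frac{d^{2}r}{dt^{2}} = \sqrt{r},
\]

which, besides the trivial solution \(r(t) = 0\), admits the family

\[
r(t) =
\begin{cases}
0, & t \le T,\\
\tfrac{1}{144}\,(t-T)^{4}, & t \ge T,
\end{cases}
\]

for every \(T \ge 0\): differentiating twice gives \(\tfrac{1}{12}(t-T)^{2} = \sqrt{r}\), so the ball may sit forever or begin to slide at any arbitrary moment T, each history in full accord with the law.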
4.2 Special Relativistic physics
Two features of special relativistic physics make it perhaps the most hospitable environment for determinism of any major theoretical context: the fact that no process or signal can travel faster than the speed of light, and the static, unchanging spacetime structure. The former feature, including a prohibition against tachyons (hypothetical particles travelling faster than light[4]), rules out space invaders and other unbounded-velocity systems. The latter feature makes the space-time itself nice and stable and non-singular—unlike the dynamic space-time of General Relativity, as we shall see below. For source-free electromagnetic fields in special-relativistic space-time, a nice form of Laplacean determinism is provable. Unfortunately, interesting physics needs more than source-free electromagnetic fields. Earman (1986) ch. IV surveys in depth the pitfalls for determinism that arise once things are allowed to get more interesting (e.g. by the addition of particles interacting gravitationally).
4.3 General Relativity (GTR)
Defining an appropriate form of determinism for the context of general relativistic physics is extremely difficult, due to both foundational interpretive issues and the plethora of weirdly-shaped space-time models allowed by the theory's field equations. The simplest way of treating the issue of determinism in GTR would be to state flatly: determinism fails, frequently, and in some of the most interesting models. To leave it at that would however be to miss an important opportunity to use determinism to probe physical and philosophical issues of great importance (a use of determinism stressed frequently by Earman). Here we will briefly describe some of the most important challenges that arise for determinism, directing the reader yet again to Earman (1986), and also Earman (1995) for more depth.
4.3.1 Determinism and manifold points
In GTR, we specify a model of the universe by giving a triple of mathematical objects, <M, g, T>. M represents a continuous “manifold”: that means a sort of unstructured space(-time), made up of individual points and having smoothness or continuity, and dimensionality (usually, 4-dimensional), but no further structure. What is the further structure a space-time needs? Typically, at least, we expect the time-direction to be distinguished from space-directions; and we expect there to be well-defined distances between distinct points; and also a determinate geometry (making certain continuous paths in M be straight lines, etc.). All of this extra structure is coded into g. So M and g together represent space-time. T represents the matter and energy content distributed around in space-time (if any, of course).
For mathematical reasons not relevant here, it turns out to be possible to take a given model spacetime and perform a mathematical operation called a “hole diffeomorphism” h* on it; the diffeomorphism's effect is to shift around the matter content T and the metric g relative to the continuous manifold M.[5] If the diffeomorphism is chosen appropriately, it can move around T and g after a certain time t = 0, but leave everything alone before that time. Thus, the new model represents the matter content (now h*T) and the metric (h*g) as differently located relative to the points of M making up space-time. Yet, the new model is also a perfectly valid model of the theory. This looks on the face of it like a form of indeterminism: GTR's equations do not specify how things will be distributed in space-time in the future, even when the past before a given time t is held fixed. See Figure 5:
Figure 5: “Hole” diffeomorphism shifts contents of spacetime
Usually the shift is confined to a finite region called the hole (for historical reasons). Then it is easy to see that the state of the world at time t = 0 (and all the history that came before) does not suffice to fix whether the future will be that of our first model, or its shifted counterpart in which events inside the hole are different.
This is a form of indeterminism first highlighted by Earman and Norton (1987) as an interpretive philosophical difficulty for realism about GTR's description of the world, especially the point manifold M. They showed that realism about the manifold as a part of the furniture of the universe (which they called “manifold substantivalism”) commits us to a radical, automatic indeterminism in GTR, and they argued that this is unacceptable. (See the hole argument and Hoefer (1996) for one response on behalf of the space-time realist, and discussion of other responses.) For now, we will simply note that this indeterminism, unlike most others we are discussing in this section, is empirically vacuous: our two models <M, g, T> and the shifted model <M, h*g, h*T> are empirically indistinguishable.
4.3.2 Singularities
The separation of space-time structures into manifold and metric (or connection) facilitates mathematical clarity in many ways, but also opens up Pandora's box when it comes to determinism. The indeterminism of the Earman and Norton hole argument is only the tip of the iceberg; singularities make up much of the rest of the berg. In general terms, a singularity can be thought of as a “place where things go bad” in one way or another in the space-time model. For example, near the center of a Schwarzschild black hole, curvature increases without bound, and at the center itself it is undefined, which means that Einstein's equations cannot be said to hold, which means (arguably) that this point does not exist as a part of the space-time at all! Some specific examples are clear, but giving a general definition of a singularity, like defining determinism itself in GTR, is a vexed issue (see Earman (1995) for an extended treatment; Callender and Hoefer (2001) gives a brief overview). We will not attempt here to catalog the various definitions and types of singularity.
Different types of singularity bring different types of threat to determinism. In the case of ordinary black holes, mentioned above, all is well outside the so-called “event horizon”, which is the spherical surface defining the black hole: once a body or light signal passes through the event horizon to the interior region of the black hole, it can never escape again. Generally, no violation of determinism looms outside the event horizon; but what about inside? Some black hole models have so-called “Cauchy horizons” inside the event horizon, i.e., surfaces beyond which determinism breaks down.
Another way for a model spacetime to be singular is to have points or regions go missing, in some cases by simple excision. Perhaps the most dramatic form of this involves taking a nice model with a space-like surface t = E (i.e., a well-defined part of the space-time that can be considered “the state of the world at time E”), and cutting out and throwing away this surface and all points temporally later. The resulting spacetime satisfies Einstein's equations; but, unfortunately for any inhabitants, the universe comes to a sudden and unpredictable end at time E. This is too trivial a move to be considered a real threat to determinism in GTR; we can impose a reasonable requirement that space-time not “run out” in this way without some physical reason (the spacetime should be “maximally extended”). For discussion of precise versions of such a requirement, and whether they succeed in eliminating unwanted singularities, see Earman (1995, chapter 2).
The most problematic kinds of singularities, in terms of determinism, are naked singularities (singularities not hidden behind an event horizon). When a singularity forms from gravitational collapse, the usual model of such a process involves the formation of an event horizon (i.e. a black hole). A universe with an ordinary black hole has a singularity, but as noted above, (outside the event horizon at least) nothing unpredictable happens as a result. A naked singularity, by contrast, has no such protective barrier. In much the way that anything can disappear by falling into an excised-region singularity, or appear out of a white hole (white holes themselves are, in fact, technically naked singularities), there is the worry that anything at all could pop out of a naked singularity, without warning (hence, violating determinism en passant). While most white hole models have Cauchy surfaces and are thus arguably deterministic, other naked singularity models lack this property. Physicists disturbed by the unpredictable potentialities of such singularities have worked to try to prove various cosmic censorship hypotheses that show—under (hopefully) plausible physical assumptions—that such things do not arise by stellar collapse in GTR (and hence are not liable to come into existence in our world). To date no very general and convincing forms of the hypothesis have been proven, so the prospects for determinism in GTR as a mathematical theory do not look terribly good.
4.4 Quantum mechanics
As indicated above, QM is widely thought to be a strongly non-deterministic theory. Popular belief (even among most physicists) holds that phenomena such as radioactive decay, photon emission and absorption, and many others are such that only a probabilistic description of them can be given. The theory does not say what happens in a given case, but only says what the probabilities of various results are. So, for example, according to QM the fullest description possible of a radium atom (or a chunk of radium, for that matter), does not suffice to determine when a given atom will decay, nor how many atoms in the chunk will have decayed at any given time. The theory gives only the probabilities for a decay (or a number of decays) to happen within a given span of time. Einstein and others perhaps thought that this was a defect of the theory that should eventually be removed, by a supplemental hidden variable theory[6] that restores determinism; but subsequent work showed that no such hidden variables account could exist. At the microscopic level the world is ultimately mysterious and chancy.
So goes the story; but like much popular wisdom, it is partly mistaken and/or misleading. Ironically, quantum mechanics is one of the best prospects for a genuinely deterministic theory in modern times! Even more than in the case of GTR and the hole argument, everything hinges on what interpretational and philosophical decisions one adopts. The fundamental law at the heart of non-relativistic QM is the Schrödinger equation. The evolution of a wavefunction describing a physical system under this equation is normally taken to be perfectly deterministic.[7] If one adopts an interpretation of QM according to which that's it—i.e., nothing ever interrupts Schrödinger evolution, and the wavefunctions governed by the equation tell the complete physical story—then quantum mechanics is a perfectly deterministic theory. There are several interpretations that physicists and philosophers have given of QM which go this way. (See the entry on quantum mechanics.)
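In symbols (a standard textbook statement, added here for concreteness; it assumes a time-independent Hamiltonian \(\hat H\)):

\[
i\hbar\,\frac{\partial\psi(t)}{\partial t} = \hat H\,\psi(t)
\qquad\Longrightarrow\qquad
\psi(t) = e^{-i\hat H t/\hbar}\,\psi(0),
\]

so the initial wavefunction fixes the state at all times, past and future, unless something such as a collapse postulate interrupts the unitary evolution.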
More commonly—and this is part of the basis for the popular wisdom—physicists have resolved the quantum measurement problem by postulating that some process of “collapse of the wavefunction” occurs from time to time (particularly during measurements and observations) that interrupts Schrödinger evolution. The collapse process is usually postulated to be indeterministic, with probabilities for various outcomes, via Born's rule, calculable on the basis of a system's wavefunction. The once-standard, Copenhagen interpretation of QM posits such a collapse. It has the virtue of solving certain paradoxes such as the infamous Schrödinger's cat paradox, but few philosophers or physicists can take it very seriously unless they are either idealists or instrumentalists. The reason is simple: the collapse process is not physically well-defined, and feels too ad hoc to be a fundamental part of nature's laws.[8]
In 1952 David Bohm created an alternative interpretation of QM—perhaps better thought of as an alternative theory—that realizes Einstein's dream of a hidden variable theory, restoring determinism and definiteness to micro-reality. In Bohmian quantum mechanics, unlike other interpretations, it is postulated that all particles have, at all times, a definite position and velocity. In addition to the Schrödinger equation, Bohm posited a guidance equation that determines, on the basis of the system's wavefunction and the particles' initial positions, what their future positions should be. As much as any classical theory of point particles moving under force fields, then, Bohm's theory is deterministic. Amazingly, he was also able to show that, as long as the statistical distribution of the particles' initial positions is chosen so as to meet a “quantum equilibrium” condition, his theory is empirically equivalent to standard Copenhagen QM. In one sense this is a philosopher's nightmare: with genuine empirical equivalence as strong as Bohm obtained, it seems experimental evidence can never tell us which description of reality is correct. (Fortunately, we can safely assume that neither is perfectly correct, and hope that our Final Theory has no such empirically equivalent rivals.) In other senses, the Bohm theory is a philosopher's dream come true, eliminating much (but not all) of the weirdness of standard QM and restoring determinism to the physics of atoms and photons. The interested reader can find out more from the link above, and references therein.
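For concreteness, the guidance equation in the form used by Dürr, Goldstein and Zanghì (1992) reads

\[
\frac{dQ_{k}}{dt} \;=\; \frac{\hbar}{m_{k}}\,\operatorname{Im}\!\left(\frac{\nabla_{k}\,\psi}{\psi}\right)\!(Q_{1},\dots,Q_{N}),
\]

where \(Q_{k}\) is the actual position of the k-th particle: given the wavefunction and the initial configuration, every trajectory is fixed deterministically.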
This small survey of determinism's status in some prominent physical theories, as indicated above, does not really tell us anything about whether determinism is true of our world. Instead, it raises a couple of further disturbing possibilities for the time when we do have the Final Theory before us (if such time ever comes): first, we may have difficulty establishing whether the Final Theory is deterministic or not—depending on whether the theory comes loaded with unsolved interpretational or mathematical puzzles. Second, we may have reason to worry that the Final Theory, if indeterministic, has an empirically equivalent yet deterministic rival (as illustrated by Bohmian quantum mechanics.)
5. Chance and Determinism
Some philosophers maintain that if determinism holds in our world, then there are no objective chances in our world. And often the word ‘chance’ here is taken to be synonymous with 'probability', so these philosophers maintain that there are no non-trivial objective probabilities for events in our world. (The caveat “non-trivial” is added here because on some accounts all future events that actually happen have probability, conditional on past history, equal to 1, and future events that do not happen have probability equal to zero. Non-trivial probabilities are probabilities strictly between zero and one.) Conversely, it is often held, if there are laws of nature that are irreducibly probabilistic, determinism must be false. (Some philosophers would go on to add that such irreducibly probabilistic laws are the basis of whatever genuine objective chances obtain in our world.)
The discussion of quantum mechanics in section 4 shows that it may be difficult to know whether a physical theory postulates genuinely irreducible probabilistic laws or not. If a Bohmian version of QM is correct, then the probabilities dictated by the Born rule are not irreducible. If that is the case, should we say that the probabilities dictated by quantum mechanics are not objective? Or should we say that we need to distinguish ‘chance’ and ‘probability’ after all—and hold that not all objective probabilities should be thought of as objective chances? The first option may seem hard to swallow, given the many-decimal-place accuracy with which such probability-based quantities as half-lives and cross-sections can be reliably predicted and verified experimentally with QM.
Whether objective chance and determinism are really incompatible or not may depend on what view of the nature of laws is adopted. On a “pushy explainers” view of laws such as that defended by Maudlin (2007), probabilistic laws are interpreted as irreducible dynamical transition-chances between allowed physical states, and the incompatibility of such laws with determinism is immediate. But what should a defender of a Humean view of laws, such as the BSA theory (section 2.4 above), say about probabilistic laws? The first thing that needs to be done is explain how probabilistic laws can fit into the BSA account at all, and this requires modification or expansion of the view, since as first presented the only candidates for laws of nature are true universal generalizations. If ‘probability’ were a univocal, clearly understood notion then this might be simple: We allow universal generalizations whose logical form is something like: “Whenever conditions Y obtain, Pr(A) = x”. But it is not at all clear how the meaning of ‘Pr’ should be understood in such a generalization; and it is even less clear what features the Humean pattern of actual events must have, for such a generalization to be held true. (See the entry on interpretations of probability and Lewis (1994).)
Humeans about laws believe that what laws there are is a matter of what patterns are there to be discerned in the overall mosaic of events that happen in the history of the world. It seems plausible enough that the patterns to be discerned may include not only strict associations (whenever X, Y), but also stable statistical associations. If the laws of nature can include either sort of association, a natural question to ask seems to be: why can't there be non-probabilistic laws strong enough to ensure determinism, and on top of them, probabilistic laws as well? If a Humean wanted to capture the laws not only of fundamental theories, but also non-fundamental branches of physics such as (classical) statistical mechanics, such a peaceful coexistence of deterministic laws plus further probabilistic laws would seem to be desirable. Loewer (2004) argues that this peaceful coexistence can be achieved within Lewis' version of the BSA account of laws.
6. Determinism and Human Action
In the introduction, we noted the threat that determinism seems to pose to human free agency. It is hard to see how, if the state of the world 1000 years ago fixes everything I do during my life, I can meaningfully say that I am a free agent, the author of my own actions, which I could have freely chosen to perform differently. After all, I have neither the power to change the laws of nature, nor to change the past! So in what sense can I attribute freedom of choice to myself?
Philosophers have not lacked ingenuity in devising answers to this question. There is a long tradition of compatibilists arguing that freedom is fully compatible with physical determinism. Hume went so far as to argue that determinism is a necessary condition for freedom—or at least, he argued that some causality principle along the lines of “same cause, same effect” is required. There have been equally numerous and vigorous responses by those who are not convinced. Can a clear understanding of what determinism is, and how it tends to succeed or fail in real physical theories, shed any light on the controversy?
Physics, particularly 20th century physics, does have one lesson to impart to the free will debate; a lesson about the relationship between time and determinism. Recall that we noticed that the fundamental theories we are familiar with, if they are deterministic at all, are time-symmetrically deterministic. That is, earlier states of the world can be seen as fixing all later states; but equally, later states can be seen as fixing all earlier states. We tend to focus only on the former relationship, but we are not led to do so by the theories themselves.
Nor does 20th (or 21st) century physics countenance the idea that there is anything ontologically special about the past, as opposed to the present and the future. In fact, it fails to use these categories in any respect, and teaches that in some senses they are probably illusory.[9] So there is no support in physics for the idea that the past is “fixed” in some way that the present and future are not, or that it has some ontological power to constrain our actions that the present and future do not have. It is not hard to uncover the reasons why we naturally do tend to think of the past as special, and assume that both physical causation and physical explanation work only in the past → present/future direction (see the entry on thermodynamic asymmetry in time). But these pragmatic matters have nothing to do with fundamental determinism. If we shake loose from the tendency to see the past as special, when it comes to the relationships of determinism, it may prove possible to think of a deterministic world as one in which each part bears a determining—or partial-determining—relation to other parts, but in which no particular part (i.e., region of space-time) has a special, stronger determining role than any other. Hoefer (2002) uses these considerations to argue in a novel way for the compatibility of determinism with human free agency.
• Batterman, R. B., 1993, “Defining Chaos,” Philosophy of Science, 60: 43–66.
• Bishop, R. C., 2002, “Deterministic and Indeterministic Descriptions,” in Between Chance and Choice, H. Atmanspacher and R. Bishop (eds.), Imprint Academic, 5–31.
• Butterfield, J., 1998, “Determinism and Indeterminism,” in Routledge Encyclopedia of Philosophy, E. Craig (ed.), London: Routledge.
• Callender, C., 2000, “Shedding Light on Time,” Philosophy of Science (Proceedings of PSA 1998), 67: S587–S599.
• Callender, C., and Hoefer, C., 2001, “Philosophy of Space-time Physics,” in The Blackwell Guide to the Philosophy of Science, P. Machamer and M. Silberstein (eds), Oxford: Blackwell, pp. 173–198.
• Cartwright, N., 1999, The Dappled World, Cambridge: Cambridge University Press.
• Dupré, J., 2001, Human Nature and the Limits of Science, Oxford: Oxford University Press.
• Dürr, D., Goldstein, S., and Zanghì, N., 1992, “Quantum Chaos, Classical Randomness, and Bohmian Mechanics,” Journal of Statistical Physics, 68: 259–270.
• Earman, J., 1984, “Laws of Nature: The Empiricist Challenge,” in R. J. Bogdan (ed.), D.H. Armstrong, Dordrecht: Reidel, pp. 191–223.
• Earman, J., and J. Norton, 1987, “What Price Spacetime Substantivalism: the Hole Story,” British Journal for the Philosophy of Science, 38: 515–525.
• Earman, J. and J. D. Norton, 1998, “Comments on Laraudogoitia's ‘Classical Particle Dynamics, Indeterminism and a Supertask’,” British Journal for the Philosophy of Science, 49: 123–133.
• Ford, J., 1989, “What is chaos, that we should be mindful of it?” in The New Physics, P. Davies (ed.), Cambridge: Cambridge University Press, 348–372.
• Gisin, N., 1991, “Propensities in a Non-Deterministic Physics”, Synthese, 89: 287–297.
• Gutzwiller, M., 1990, Chaos in Classical and Quantum Mechanics, New York: Springer-Verlag.
• Hitchcock, C., 1999, “Contrastive Explanation and the Demons of Determinism,” British Journal of the Philosophy of Science, 50: 585–612.
• Hoefer, C., 1996, “The Metaphysics of Spacetime Substantivalism,” The Journal of Philosophy, 93: 5–27.
• Hoefer, C., 2002, “Freedom From the Inside Out,” in Time, Reality and Experience, C. Callender (ed.), Cambridge: Cambridge University Press, pp. 201–222.
• Hoefer, C., 2002b, “For Fundamentalism,” Philosophy of Science v. 70, no. 5 (PSA 2002 Proceedings), pp. 1401–1412.
• Hutchison, K. 1993, “Is Classical Mechanics Really Time-reversible and Deterministic?” British Journal of the Philosophy of Science, 44: 307–323.
• Laplace, P., 1820, Essai Philosophique sur les Probabilités, forming the introduction to his Théorie Analytique des Probabilités, Paris: V Courcier; repr. F.W. Truscott and F.L. Emory (trans.), A Philosophical Essay on Probabilities, New York: Dover, 1951.
• Leiber, T., 1998, “On the Actual Impact of Deterministic Chaos,” Synthese, 113: 357–379.
• Lewis, D., 1994, “Humean Supervenience Debugged,” Mind, 103: 473–490.
• Loewer, B., 2004, “Determinism and Chance,” Studies in History and Philosophy of Modern Physics, 32: 609–620.
• Malament, D., 2008, “Norton's Slippery Slope,” Philosophy of Science, vol. 75, no. 4, pp. 799–816.
• Maudlin, T. 2007, The Metaphysics Within Physics, Oxford: Oxford University Press.
• Melia, J. 1999, “Holes, Haecceitism and Two Conceptions of Determinism,” British Journal of the Philosophy of Science, 50: 639–664.
• Mellor, D. H. 1995, The Facts of Causation, London: Routledge.
• Norton, J.D., 2003, “Causation as Folk Science,” Philosopher's Imprint, 3 (4): [Available online].
• Ornstein, D. S., 1974, Ergodic Theory, Randomness, and Dynamical Systems, New Haven: Yale University Press.
• Ruelle, D., 1991, Chance and Chaos, London: Penguin.
• Russell, B., 1912, “On the Notion of Cause,” Proceedings of the Aristotelian Society, 13: 1–26.
• Shanks, N., 1991, “Probabilistic physics and the metaphysics of time,” South African Journal of Philosophy, 10: 37–44.
• Sinai, Ya.G., 1970, “Dynamical systems with elastic reflections,” Russ. Math. Surveys 25: 137–189.
• Suppes, P. and M. Zanotti, 1996, Foundations of Probability with Applications, New York: Cambridge University Press.
• Suppes, P., 1999, “The Noninvariance of Deterministic Causal Models,” Synthese, 121: 181–198.
• van Fraassen, B., 1989, Laws and Symmetry, Oxford: Clarendon Press.
• Van Kampen, N. G., 1991, “Determinism and Predictability,” Synthese, 89: 273–281.
• Winnie, J. A., 1996, “Deterministic Chaos and the Nature of Chance,” in The Cosmos of Science—Essays of Exploration, J. Earman and J. Norton (eds.), Pittsburgh: University of Pittsburgh Press, pp. 299–324.
• Xia, Z., 1992, “The existence of noncollision singularities in newtonian systems,” Annals of Mathematics, 135: 411–468.
Related Entries
compatibilism | free will | Hume, David | incompatibilism: (nondeterministic) theories of free will | laws of nature | Popper, Karl | probability, interpretations of | quantum mechanics | quantum mechanics: Bohmian mechanics | Russell, Bertrand | space and time: supertasks | space and time: the hole argument | time: thermodynamic asymmetry in
The author would like to acknowledge the invaluable help of John Norton in the preparation of this entry. Thanks also to A. Ilhamy Amiry for bringing to my attention some errors in an earlier version of this entry.
Copyright © 2010 by Carl Hoefer
Tuesday, January 13, 2009
Mathematical Concepts, Problems, and Solutions That Are Still in Use Today
Calculus (from the Latin calculus, meaning “small stone”) is the branch of mathematics that includes limits, derivatives, integrals, and infinite series. Calculus has broad applications in science and engineering and is used to solve complex problems for which elementary algebra alone is insufficient. Calculus has two major branches, differential calculus and integral calculus, which are connected to each other by the fundamental theorem of calculus.
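That theorem, stated for a function f with a continuous derivative on [a, b], says that differentiation and integration are inverse operations:

\[
\int_{a}^{b} f'(x)\,dx = f(b) - f(a).
\]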
The development of calculus can be traced through several periods: ancient, medieval, and modern. In the ancient era, some ideas about integration had already appeared, though they were not yet developed systematically. Methods for computing volumes and areas, the central concern of integral calculus, can be traced back to the Egyptian Moscow Papyrus (c. 1800 BC), in which the Egyptians computed the volume of a pyramidal frustum. Archimedes developed these ideas further and devised heuristics that resemble integral calculus.
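Problem 14 of the Moscow Papyrus computes exactly this volume. For a frustum of a square pyramid of height h with lower side a and upper side b, the rule used is equivalent to the modern formula

\[
V = \frac{h}{3}\left(a^{2} + ab + b^{2}\right),
\]

and the papyrus works the case h = 6, a = 4, b = 2, obtaining V = 2 × (16 + 8 + 4) = 56.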
In the medieval era, the Indian mathematician Aryabhata used the concept of the infinitesimal in 499 and expressed astronomical problems in the form of basic differential equations. In the modern era, independent discoveries were made in early 17th-century Japan by mathematicians such as Seki Kōwa. In Europe, mathematicians such as John Wallis and Isaac Barrow made breakthroughs toward calculus, and James Gregory proved a special case of the fundamental theorem of calculus in 1668.
Gottfried Wilhelm Leibniz was at first accused of plagiarizing unpublished work of Sir Isaac Newton, but he is now regarded as an independent contributor to calculus, which both men used to obtain a more detailed understanding of space, time, and motion. For many centuries, mathematicians and philosophers had tried to resolve paradoxes involving division by zero or the sum of an infinite series. The ancient Greek philosopher Zeno gave several famous examples, known as Zeno's paradoxes. Calculus provides the tools, especially the limit and the infinite series, that resolve these paradoxes.
The number pi (π)
Pi is a number that still prompts questions: is π equal to 22/7, or to 3.14? Today π ≈ 3.14159…, commonly rounded to 3.14, is defined as the ratio of a circle's circumference to its diameter. This means that if we measure the circumference of a circle and compare it with the diameter, the result is approximately 22 to 7; the fraction 22/7 = 3.142857142857… is only a convenient rational approximation. π itself is an irrational number, which belongs to the category of real numbers.
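A small Python sketch (my own illustration, not from the original post) of both points, that 22/7 merely approximates π and that π can be computed from an independent definition:

```python
import math

# 22/7 is a rational approximation of pi, not pi itself.
print(22 / 7)                 # 3.142857142857143
print(math.pi)                # 3.141592653589793
print(abs(22 / 7 - math.pi))  # ~1.26e-03, the error of the approximation

# An independent estimate from the Leibniz series: pi/4 = 1 - 1/3 + 1/5 - ...
approx = 4 * sum((-1) ** k / (2 * k + 1) for k in range(200_000))
print(approx)                 # 3.1415876..., converging (slowly) to pi
```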
Mathematical Concepts, Problems, and Solutions That Are No Longer in Use Today
The Counting Slate (Abacus)
Since ancient times, counting was done with pebbles placed above and below a line on a board, with columns marked by Roman numerals. Each pebble below the line in the rightmost column counts as one unit, and each pebble above the line is worth five. When a column's count reaches 10, a pebble is carried into the next column. The table below shows a count equal to 256,317 sheep.
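A hedged Python sketch of the scheme just described (the column encoding is my own reading of the description):

```python
# Each decimal column of the board holds pebbles above the line (worth five
# each) and below the line (worth one each); carrying at ten moves a pebble
# into the next column, which is ordinary base-10 place value.
def counting_board(n):
    return [(int(d) // 5, int(d) % 5) for d in str(n)]  # (fives, ones) per column

print(counting_board(256317))
# [(0, 2), (1, 0), (1, 1), (0, 3), (0, 1), (1, 2)]
# e.g. the digit 6 = one pebble above the line (5) plus one pebble below (1)
```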
Mathematical Concepts, Problems, and Solutions That Have No Relation to Mathematics
I haven’t found the mathematics Concept, problems of Mathematics, and the Solution of Mathematics which has no Relation with Mathematics.
Friday, January 9, 2009
The Foundations of Mathematics
The foundations of mathematics consist of mathematical logic, axiomatic set theory, proof theory, model theory, and recursion theory. These are related to the philosophy of mathematics.
Philosophies of mathematics include:
“Platonists, such as Kurt Gödel, hold that numbers are abstract, necessarily existing objects, independent of the human mind”[1]
“Formalists, such as David Hilbert (1862–1943), hold that mathematics is no more or less than mathematical language. It is simply a series of games...” [1]
The foundations of mathematics:
1) Strong: for example, geometry in England served as a foundation of mathematics.
2) Viewless:
Epistemology: the study of the sources of mathematical knowledge (Immanuel Kant).
Based on critical thinking:
1. Synthetic a priori: knowledge that can be understood without observation, obtained other than from the principle of contradiction.
2. Analytic.
Sunday, November 30, 2008
About Diederik Korteweg and Gustav de Vries
Diederik Korteweg was a Dutch mathematician. His father, a judge in 's-Hertogenbosch in the south of the Netherlands, sent him to school at a military academy, but Korteweg did not feel at home there. He decided against a military career and, making the first of his changes of direction, began his studies at the Polytechnical School of Delft. Out of love of mathematics he chose to concentrate on that subject, and he later became a teacher at a high school. One of his students was Gustav de Vries.
Gustav de Vries was a Dutch mathematician who, together with Diederik Korteweg, discovered the Korteweg–de Vries equation (KdV equation).
The history of the KdV equation started with experiments by John Scott Russell in 1834, followed by theoretical investigations by Lord Rayleigh and Joseph Boussinesq around 1870 and, finally, Korteweg and De Vries in 1895.
The KdV equation has several connections to physical problems. In addition to being the governing equation of the string in the Fermi–Pasta–Ulam problem in the continuum limit, it approximately describes the evolution of long, one-dimensional waves in many physical settings, including:
• shallow-water waves with weakly non-linear restoring forces,
• long internal waves in a density-stratified ocean,
• ion-acoustic waves in a plasma,
• acoustic waves on a crystal lattice,
• and more.
The KdV equation can also be solved using the inverse scattering transform, a technique also applied to the non-linear Schrödinger equation.
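For reference (in a standard normalization; the post itself does not display the equation), the KdV equation for a wave profile u(x, t) and its one-soliton solution are

\[
u_{t} + 6\,u\,u_{x} + u_{xxx} = 0,
\qquad
u(x,t) = \frac{c}{2}\,\operatorname{sech}^{2}\!\left(\frac{\sqrt{c}}{2}\,(x - ct - a)\right),
\]

a solitary wave travelling at speed c whose amplitude grows with its speed, the sort of stable hump Russell reported observing in 1834.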
About Leonardo Fibonacci
Leonardo Fibonacci was an Italian mathematician who introduced the Arabic numeral system to Europe. His father Guglielmo was nicknamed Bonaccio. Leonardo's mother, Alessandra, died when he was nine years old.
Leonardo traveled to North Africa, where he found the Arabic number system more practical to use than Roman numerals, and so he studied it. In 1202, at the age of 27, he wrote down what he had learned in the book Liber Abaci. The book was welcomed by the literate class of Europe; it advocated numeration with the digits 0–9 and place value. This numeral system spread to all corners of the world.
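A small sketch (my own) contrasting the two systems Fibonacci compared: parsing a Roman numeral needs special-case rules, while place-value digits map directly onto powers of ten.

```python
ROMAN = {'I': 1, 'V': 5, 'X': 10, 'L': 50, 'C': 100, 'D': 500, 'M': 1000}

def roman_to_int(s):
    """Convert a Roman numeral, handling subtractive pairs like IV and IX."""
    total = 0
    for i, ch in enumerate(s):
        value = ROMAN[ch]
        if i + 1 < len(s) and value < ROMAN[s[i + 1]]:
            total -= value   # a smaller numeral before a larger one subtracts
        else:
            total += value
    return total

print(roman_to_int('MCCII'))  # 1202, the year Liber Abaci appeared
```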
Thursday, January 27, 2011
美國有三寶,麥粉,當鋪,墨外勞。 (麥當勞?) [Roughly: “America has three treasures: wheat flour (mài fěn), pawnshops (dàng pù), and Mexican laborers (mò wài láo).” The underlined syllables combine into 麥當勞, “McDonald's.”]
The first paragraph of this page contains an explanation for the first line in this post, but the other two lines are whimsical paraphrases, arguably very un-PC, of my own making.
I'm watching the movie…
…but I've never known anything about the series before. An interesting tidbit from the Wikipedia entry on "The Green Hornet":
He would be accompanied by his similarly masked chauffeur/bodyguard/enforcer, who was also Reid's valet, Kato, initially described as Japanese, and by 1939 as Filipino of Japanese descent.[4] Following the Japanese attack on Pearl Harbor on December 7, 1941, references to a Japanese heritage were dropped.[5]
Specifically, in and up to 1939, in the series' opening narration, Kato was called Britt Reid's "Japanese valet" and from 1940 to '45 he was Reid's "faithful valet." However, by at least the June 1941 episode "Walkout for Profit," about 14 minutes into the episode, Reid specifically noted Kato having a Philippine origin and thus he became Reid's "Filipino valet" as of that point.[6] When the characters were used in the first of a pair of movie serials, the producers had Kato's nationality given as Korean.
Children vs. children…
In I Corinthians 7, St Paul teaches that it is better to marry and raise children than to burn with lust, whereas modern culture teaches that it is better to remain like children and burn with lust than to marry.
Granted, Jesus teaches that we must enter the Kingdom as children, but something tells me He didn't mean giant-sized children with credit lines and scrupulously well fed sex lives.
Monday, January 24, 2011
By hook or by crook…
From "When Booze Was Banned But Pot Was Not" by Jacob Sullum, Reason
Posted on January 14, 2011
Friday, January 21, 2011
An open letter to my cat…
Dear Cheetoh,
Sometimes you remind me of the Warren Zevon song "Excitable Boy". That's not a compliment. For one thing, you're not a boy. If you must keep whining, please learn to speak Spanish, Chinese, German or English, so I can understand what you keep going on about. I might like to sleep, you know?
But… it is impressive that you like to sleep on top of my boxed set of Feynman's Lectures on Physics. You can stay.
Thursday, January 20, 2011
The cyantific maythid…
This post does not exist…
This sentence is only six words long.
This is the first seven-word sentence here.
2. stochastic causation undermines determinism, then
3. there may be literally no formulable laws of neuroscience.
The semiosis of semiosis…
"First of all," writes John Deely on page 5 of What Distinguishes Human Understanding? (South Bend, IN: St. Augustine's Press. 2002), "it is no longer possible to participate intelligently in this discussion [i.e. the question of animal cognition and human understanding] without taking account of the fact that there are qualitative differences in the communication systems of all biological species or forms." All species have species-specific modes of semiosis, so the question of "whether humans are unique" is a red herring. We might as well ask whether gold finches are unique. Clearly they are––they are gold finches, not golden retrievers. "Every cognitive organism belongs to one or another species," continues Deely, "and every cognitive species is distinguished by apprehensive modalities peculiar to itself" (loc. cit.).
I have written before about these matters, and my most sustained effort to distinguish human semiosis from 'mere' animal cognition involved a "fourfold" conception of semiosis.
The idea of a square: human semiosis differs from general animal cognition as qualitatively as squares differ from triangles, lines, and points, yet not at the exclusion of points, lines and tri-angle-arity.
Human semiosis transcendentally includes lower forms of semiosis: it manifests those forms but is not reducible to them.
While reading Deely's book I pondered a simpler way to explain the distinction: Only humans can cognize about the semiotic cognition of other species.
This thesis grants that non-human animals do cognize. Pace Descartes, animals are not just elaborate clockworks. They have emotions, desires, goals, fears, etc. They are robust semiotic cognizers.
Yet, interestingly, members of a species concern themselves with semiotic exchanges relevant only to their mutual interaction. Sparrows recognize wolf cries, and some animals can mimic the calls of other animals. But it seems that only humans make signs about the sign-making of other species. We not only manipulate our own species-intelligible signs qua signs (viz. for personal gain, social function, etc.) but also manipulate signs as signs for cognizing the signs made by other animals. Call the former β-cognition and the latter δ-cognition. I choose δ to denote human semiosis in deference to Walker Percy's idea of the Delta Factor. I should also note that my notion of fourfold semiotics also stems from my reading of Percy's The Message in the Bottle.
Generative anthropology, another field of inquiry of which I only recently became aware, bears striking resemblances to Percy's thesis. According to (sigh) the Wikipedia entry,
Generative Anthropology is a field of study based on the theory that the origin of human language was a singular event and that the history of human culture is a genetic or "generative" development stemming from the development of language. …
Generative Anthropology originated with Professor Eric Gans of UCLA who developed his ideas in a series of books and articles beginning with The Origin of Language: A Formal Theory of Representation (1981), which builds on the ideas of René Girard, notably that of mimetic desire. However, in establishing the theory of Generative Anthropology, Gans departs from and goes beyond Girard's work in many ways. Generative Anthropology is therefore an independent and original way of understanding the human species, its origin, culture, history, and development. …
The central hypothesis of Generative Anthropology is that the origin of language was a singular event. Human language is radically different from animal communication systems. It possesses syntax, allowing for unlimited new combinations and content; it is symbolic, and it possesses a capacity for history. Thus it is hypothesized that the origin of language must have been a singular event, and the principle of parsimony requires that it originated only once.
Language makes possible new forms of social organization radically different from animal "pecking order" hierarchies dominated by an alpha male. Thus, the development of language allowed for a new stage in human evolution - the beginning of culture, including religion, art, desire, and the sacred. As language provides memory and history via a record of its own history, language itself can be defined via a hypothesis of its origin based on our knowledge of human culture. As with any scientific hypothesis, its value is in its ability to account for the known facts of human history and culture.
So there's yet another tract of knowledge over which I can cast the seemingly endless seeds of my ignorance.
Where was I? Right: fourfold semiosis, δ-cognition. Damn. Sorry. My 'simplified' epiphany was a lot clearer when I had it reading Deely than it looks while writing here. I'll just cite Deely again:
Reading Deely is for me rather like flirting with a gorgeous woman. She's very attractive to me and I make attempts to win her over, but she is also daunting and baffling in her feminine exaltation. Likewise, though I am immensely attracted to the wisdom Deely has to offer, I often find his prose florid and stilted. This is probably due both to the influence of Peirce and Heidegger, neither of whom lacked for byzantine prose, on Deely's thought, and to his fluency in Latin, Greek, German, Spanish, and Italian: he writes like a Peirce born during the Renaissance.
Wednesday, January 19, 2011
Can't… "Can't"… Kant…
[UPDATE, 20 Jan 2011: I had a flash of intuition but expressed it so cryptically that now I'm trying to decrypt it for myself. I have made small revisions to the original oracle and added explanatory glosses after the three + signs.]
Because the cognitive "can't" is performatively equivalent to the metaphysical "can't", therefore determinism is false. "I can't tell you who killed your wife" because I don't know "who" he is and "I don't know who killed your wife" because "who" he is lies outside my total set of causal experience, are equivalent on determinism. Yet they are not really equivalent, therefore determinism is not a theory adequate to the real world.
+ + +
Cognitive inability: "I can't answer your question because I lack the relevant knowledge/experience."
Metaphysical inability: "I can't answer your question because there is no means by which I can discover the answer." Imagine if I were asked what the happiest man in the world on 8 March 1928 ate for breakfast.
Cognitive inability: "I can't get in the room because I don't know where the key is."
Metaphysical inability: "I can't get in the room because it is bursting with flames and I am chained to the wall."
If I say, "I can't tell you who killed your wife," my inability is due to the fact that I don't know who the killer is, not to an intrinsic lack of power on my part. If I knew who the killer is, I could tell you. My response-performance would follow from my modified cognitive state. As it happens, "I don't know who killed your wife" because who he is (i.e. anything relevant to identifying him), is outside my total set of causal experience. My cognitive deficit is based on the limitations of my causal career. On determinism, my cognitive deficit is intrinsic to me, since I cannot alter my character contrary to the total causal complex which determines me. If determinism is true, nothing is intrinsically up-to-me, but only proximately inclusive-of-me in its determined ocurrence.
Yet, clearly, my cognitive deficit is not intrinsic to me, for I could perhaps acquire the knowledge I need to answer your question. This is a problem for determinism, though, since, if my cognitive (and therefore volitional) state is totally and intrinsically dependent on my total antecedent causal complex, while it is not metaphysically impossible for my cognitive state to change, it is metaphysically impossible for me to alter my cognitive state. As such, the cognitive inability manifested in my response is performatively equivalent to a metaphysical inability intrinsic to my (wholly determined) character. The cognitive "can't" is performatively equivalent to the metaphysical "can't", therefore determinism is false. An observer would have no way of differentiating a cognitive deficit from a metaphysical inability in me, since he could only go by my performative career.
Further, since determinism stipulates that there is no autonomous "self" which can function in any way independently of an agent's encompassing causal Umwelt, it follows on determinism that there is no metaphysical principle by which the agent could determine for himself how to alter his cognitive states. The alteration of his cognitive states would not be a function of his rational agency, but rather a function––or perhaps an integral?––of the causal complex which comprises both the agent and his semiotic milieu. His cognitive and metaphysical inabilities would derive from one and the same set of causal factors. Hence, on determinism, the agent's inability would be metaphysically and cognitively equivalent. Yet these modes of inability are not really equivalent, therefore determinism is not a theory of the real world. In so far as determinism confuses, or simply elides, an important distinction in reality, determinism fails as an important theory of reality.
I'm not just conjecturing about the denial of the "self" on a materialist espousal of determinism. Consider the following excerpts from "Denying the Little God of Free Will: The Next Step for Atheists? An Open Letter to the Atheist Community" by Tom Clark, Director of the Center for Naturalism.
With a little help from added emphasis, I will let the usual contradictions and fallacies speak for themselves (yuk yuk yuk).
A few points to keep in mind are that a) speaking of what determines human behavior begs the question, b) it is a false dichotomy to treat substantial rational agency as distinct from "nature" simply because it is not mechanistic, and c) in so far as the evidence for or against naturalistic claims is neither apodeictic nor deductive, such indeterminacy reinforces the need for a choice on our part for or against those claims.
With that, have a looksee:
[T]he realization that we are not little gods has considerable benefits, both personal and social. First, by accepting and illuminating our complete causal connection to the world, a consistent naturalism leads to a compassionate understanding of human faults and virtues. Seeing that we aren’t the ultimate originators of ourselves or our behavior, we can’t take ultimate credit or blame for what we do. This reduces unwarranted self-righteousness, pride, shame, and guilt. And since we see others as fully caused…[,] we become less blaming, less punitive and more empathetic and understanding. …
Because without cause…
Because jazz exists, determinism is false. Note the use of the words "form", "developed", "substance", and "play".
Jazz is inherently indeterministic yet real, therefore reality is indeterministic.
This is an intuition akin to the role of Bach's music in coming to see the existence of God.
Tuesday, January 18, 2011
Get down to brass tacks…
From the Wikipedia entry on (mathematical) "Deterministic systems" (emphasis added by me):
In quantum mechanics, the Schrödinger equation, which describes the continuous time evolution of a system's wave function, is deterministic. However, the relationship between a system's wave function and the observable properties of the system appears to be non-deterministic.
The systems studied in chaos theory are deterministic. If the initial state were known exactly, then the future state of such a system could be predicted. However, in practice, knowledge about the future state is limited by the precision with which the initial state can be measured.
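To illustrate the quoted point about measurement precision, here is a minimal sketch of my own, using the logistic map as a stock example of a deterministic chaotic system:

```python
# Two initial conditions agreeing to ten decimal places diverge to order-one
# separation within about 40 iterations of the deterministic map x -> 4x(1-x).
def iterate(x, n):
    for _ in range(n):
        x = 4.0 * x * (1.0 - x)
    return x

x0, y0 = 0.3, 0.3 + 1e-10
for n in (10, 20, 30, 40, 50):
    print(n, abs(iterate(x0, n) - iterate(y0, n)))
# The gap roughly doubles each step: finite measurement precision therefore
# buys only a finite prediction horizon, even though the law is deterministic.
```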
My question is, if Heisenberg uncertainty is not only a deficit based on limitations in our measurement apparati, but is also inherent to any observed quantum state, then is there any means by which "the initial state" could be "known exactly"? If not, then the determinism of chaotic systems is still only promissory, as I complained in an another post recently. The larger point that I (think I) want make (as indicated in the same prior post) is this:
If determinism is simply the doctrine that for any state S that occurs, S has an antecedent cause C sufficient to explain S, then I suppose "indeterminism" about the rational will is 'deterministic' in so far as it is the will W which is a mode of C to explain any S that arises from the action of an agent A suitable for W. But that seems to be argument by definition, not by demonstration. In other words, if the choice is between a denial of causality per se and determinism, then I affirm determinism. The issue, however, is not about causality, but about mechanism and teleology. And teleology seems to be an irreducible category of rational action not sufficiently explained by mechanistic descriptions. If "determinism" is rigged to include rational agent causation, then determinism enjoys a merely Pyrrhic victory. For by including rational agency under the head of determinism, the determinist would grant that rational agency is a legitimate form of causation. But that would be a very perverse form of determinism, historically speaking.
A further worry I have––and I use "worry" in the philosopher's sense of "an abiding academic quandary"––is how determinism is supposéd to be empirically grounded (about which I worried in the previous post cited). If a major part of the basis for determinism is the success of practical prediction, then the theoretic success of determinism rests in large part on its adequacy in prediction. (I dealt with this worry to some extent in a previous post about baseball.) If, however, prediction itself collapses at certain points, whether because of our cognitive impairments or because of inherent limitations in what is physically observable, then determinism is beset with the same faults. Meanwhile, indeterminism with respect to the deliberative will W of a rational agent A faces no parallel limitation, since the theoretical adequacy of teleological accounts of behavior maintains its value even when perfect prediction fails.
Along these lines, a commenter recently suggested that determinism, while perhaps not demonstrable––though I would go farther and say it is in principle not even utterable––, is still a good "theory" for us in order to navigate our existence in the world. The commenter likened determinism to a perfect circle: while both a circle and determinism can be defined, neither ever perfectly shows up in nature. While it might suffice to note how devastating such a concession is for determinism––namely, that the world is never perfectly deterministic!–– I would like to add two further objections.
First, positing determinism as a formal guide is exactly the worst move a determinist could make, since formal systems, of which I think the definition of geometric shapes (like a circle) is one, are, after the discovery of Gödel's incompleteness theorems, fundamentally indeterminate, or at least permanently incomplete (which is just to repeat the concession that got me to this first objection). If determinism is a formal principle, sort of like a perfect Platonic notion of Seamless Causality, and if such formal principles are not actually instantiated in the empirical world, then determinism is no more a feature of the "real world" than is a perfect circle. Both are pure abstractions, not empirically determinate realities. In addition, the very 'supernaturalness' of such formal entities suggests that Determinism qua Seamless Causality manifests cognitive access to a world beyond empirical reality, namely, the world of immaterial abstraction. Significantly, it is the traditional position of "free will" indeterminists that free will basically resides in our natural power of immaterial abstraction. In so far as intellection––i.e. the cognition of abstract reality––is in principle incommensurate with any material organ, our grasp of determinism as a formal but not empirical truth would itself ground our ability as non-deterministic agents. For if determinism is just a formal set of axioms for analyzing phenomena, then it is Gödel-indeterminate, which of course patently undermines determinism.
Second, I deny that determinism is a superior "working theory" of action, since, as I have already noted, teleological (i.e. agent-based) accounts of change are integral to human existence, and in a way that determinism cannot afford. As Professor Sandra LaFave notes in her online article about free will and determinism:
"The notion of mechanical causality applies to things but not to persons. When we account for the behavior of persons, we must use teleological explanations. …
Most philosophers nowadays acknowledge the necessity of teleological explanations of human behavior. One standard argument for teleological explanation comes from Kant.
Kant says persons are like things in the sense that physical laws apply to their bodies; the indeterminist might even admit that psychological “laws” govern some of people's consciousness events. But persons are NOT like things because they can be conscious of the operation of these laws. (A thing is just subject to laws; it is not conscious of being subject to laws.) Even the hard determinist must admit this odd characteristic of persons. People can thus be aware of physical and psychological laws as observers, from the outside. These laws are viewed as things that can operate on me, but there is always a sense in which I view myself as apart from them — for example, right now, when I am reflecting about them.
When I think about how to behave, I consider reasons. I never think about causes, because insofar as I am an agent, they are never relevant. I have to make choices, and I choose on the basis of reasons. In other words, the model of physical causation does not fit at all when you try to apply it to human choices. Even if all human choices were determined, the HD [i.e. hard-determinist] model would still be completely inadequate to describe the perspective of the agent, which is what really matters for morality. The HD position is simply at odds with human experience because it continually asserts that as far as human experience is concerned, things are not what they seem.
I should add that LaFave, in the latter paragraph, is not saying that determinism is wrong because it is counter-intuitive, or because it flies in the face of "everyday experience". If that were all we needed to refute someone, Einstein would be a madman, not a genius. Rather, LaFave's objection cuts much deeper, because she says HD does not account for all the data (as scientists like to say). Here's a syllogism, reminiscent of my recent syllogistic disproof of the claim that Darwinian natural selection explains human nature (cf. infra), to capture LaFave's worry:
1. Human action is an irreducible metaphysical category in the real world.
1a. If not, the action of human advocates of determinism is an accidental metaphysical category and not a genuine feature of real-world cognition. In other words, the noises determinists make about the truth of determinism are as superficial to a rational grasp of the world as their burps are.
2. Human action is meaningful only in teleological terms.
2a. Even if we speak only of desires, we still must speak of them as more than bare causal states, to speak nothing of indeterminately complex desire-matrices.
2b. If perceptual desires are reducible to and identical with "bare causal states", then even falling rocks and rising flames 'desire'––which would be an astounding concession for the determinist to make to Aristotle after all these centuries!
3. Determinism does not have any theoretical 'space' for teleological action: it excludes teleological agent-causation.
4. Therefore, determinism does not provide a metaphysically satisfactory account of the world, which of course includes die menschliche Umwelt [the human life-world] of perceptual reasoning.
5. Therefore, determinism is false in the real world.
The point is that even if we espoused determinism merely as a "working theory", or as a "functional model", of the otherwise 'unknowable' world, it would not suffice as a working theory of what we encounter all the time, namely, our own rational agency as an irreducible category of making sense of our own existence. Indeterminism is therefore superior to determinism even in merely pragmatic terms.
In any event, another worry I have is why, on determinism, specific cases of action could not be subsumed to vintage scientific laws themselves. For if every instance of a falling apple 'perfectly' manifests the law of gravity (though, of course, the manifestation is hardly formally perfect), then why could not an instance of my choosing to buy one book rather than another be an instance of a natural law in its own right? Presumably, if in an otherwise gravity-free universe––a universe occupied by only one microscopic object––an apple were introduced near that object for an hour, the universe would, for one hour, 'have' the law of gravity. Interestingly, in the entire history of that cosmos, the law of gravity would have applied only for an hour. It would be a transient, particular event, not an overarching principle of natural action.
Presumably, as well, "the laws of physics as we know them" (how's that for a pleonasm!) did not hold prior to a certain point (i.e. the Planck epoch) in the inflation of the primeval cosmos. Did, then, other laws hold, or no laws at all? If they were laws of nature, it seems odd that they could simply fail to hold, and after a mere few nanoseconds, to boot. If they were not abiding laws of nature, however, then the cosmos began and was inflated by no knowable laws. As such, the cosmos could not be accounted for by reference to its own immanent laws, but would indicate the radical contingency of the universe. Further, if the primal conditions were only "transiently nomic", why do we presume our cosmos operates under "laws" now? Whether they last a couple nanoseconds or a few billion years, if laws of nature are incidental to the actual operation of nature, then why call them laws of nature?
The upshot as far as rational agency is concerned is that, if we can stipulate laws that apply to something as mercurial, inscrutable, unrepeatable, and transient as the birth of the cosmos, could we not also stipulate laws of action, which, while not 'violating' other principles of natural generation, included as their truthmakers the immanent action of the very agent being described by physical law? If a law could apply in the cosmos for an hour, or even just for a few nanoseconds, and then be 'sublimated' by passage into a different state of affairs (SoA), subject to distinct causal parameters, then presumably a law could exist at the junction of an agent A's deliberative choice A(c) and other laws which grounds the means for A(c).
A PSA from Elliam…
0 comment(s)
Elliot informs me comments have not been showing up in the comboxes, though they do reach him by email.
My suggestion to anyone who wants to leave comments is either
a) to write short comments around 500 words each, which should go through without a hitch, or
b) to post your comment and, if you see an error message, just go "Back" on the window to refresh it. The comment should be there.
The error message seems to be as much a lie as the cake!
That is all. As you were.
You have to laugh…
0 comment(s)
I did.
Besides, laughter is better than getting all wee wee'd up!
"Sarah Palin Defends 'Refudiate'."
Is philosophy a waist of time?
0 comment(s)
Dr. Stephen Hicks provides a regular florilegium of highly amusing "insights" about philosophy from his undergrad students over the years. A choice excerpt:
The existence of God is questionable since evil does have some good points to make. The greatest gift is to be in God’s presents, but when we are in God’s presents we should not think about ourselves. John Hick rebukes the concept that God would not allow suffering if he existed in the third paragraph of his essay. Because of evil there is said to be another force in the universe—a dark force. His name is Satin.
Viel Spaß beim Lesen! [Have fun reading!]
The latest from…
0 comment(s)
…BBEDU, my weight training blog.
My "Squats and Milk" regimen is going well.
No injuries lately to speak of.
Sunday, January 16, 2011
Truthmakers, Mythmakers, Falsifiers, Fairies…
1 comment(s)
The Bode-Titius Law (BTL), which I cited in a recent post on Darwinism and morality, has been percolating in my mind since I encountered it in Stanley Jaki's The Relevance of Physics. It is a fascinating case study of the falsifiability of scientific claims, as well as of the question of how much heuristic 'slack' a theory should be given vis-à-vis empirical inconsistencies. Lately, the specific problem that has vexed my brain is this:
The falsifier of the BTL was the existence of Neptune, once discovered.
Yet, the (unrecognized) existence of Neptune was a truthmaking condition for the validity of the BTL.
Therefore, the existence of Neptune was both a truthmaker for and falsifier of the BTL.
(I'm not the only one who hears "B.L.T." whenever I read "BTL", am I?)
Images I have: Johann Elert Bode, with full confidence in its truth, successfully using the BTL to explain astronomical phenomena at the cutting edge of the science in his day. The planet Neptune, unknown, orbiting the sun in space, influencing the very motions of the planets being described by the BTL. Passage of time. The discovery of Neptune. The planet Neptune, now recognized, orbiting the sun in space, still influencing the motions of other planets in ways yet to be described by a later law.
If at some point humanity lost awareness of Neptune but somehow managed to salvage the BTL, would the BTL be true again? Was it ever true? Is any scientific equation ever true? Should we even speak of the Bode-Titius Law as a natural phenomenon if it could so easily be "repealed" by human cognition? If a scientific law can apply to the system called "the Milky Way", can another law not just as plausibly apply to a system called "the Milky Way without reference to this and that planet"? If laws can apply when restricted to a certain range of material objects, can they not also apply when restricted to a range of temporal units (e.g. could we not construct a "law of nature" which explains why I ordered a bacon waffle sandwich at 15:30 this afternoon but makes no mention of why anyone else ordered something nor of what I did before or after that time)?
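Since the law itself is a one-line formula, it is easy to check against actual planetary distances. Here is a minimal sketch (my own illustration; the parametrization is the usual modern statement of the rule, and the semi-major axes are standard rounded textbook values):

```python
# Titius-Bode rule in its usual modern form: a_n = 0.4 + 0.3 * 2**n AU,
# with Mercury conventionally assigned n = -infinity (i.e. a = 0.4 AU).
# Observed semi-major axes are standard rounded values in AU.

def titius_bode(n):
    return 0.4 if n is None else 0.4 + 0.3 * 2 ** n

planets = [
    ("Mercury", None, 0.39), ("Venus", 0, 0.72), ("Earth", 1, 1.00),
    ("Mars", 2, 1.52), ("Ceres", 3, 2.77), ("Jupiter", 4, 5.20),
    ("Saturn", 5, 9.55), ("Uranus", 6, 19.22), ("Neptune", 7, 30.11),
]

for name, n, actual in planets:
    predicted = titius_bode(n)
    error = 100.0 * (predicted - actual) / actual
    print(f"{name:8s} predicted {predicted:6.2f} AU, "
          f"actual {actual:6.2f} AU ({error:+5.1f}%)")
```

Everything fits to within a few percent out to Uranus; Neptune misses by nearly thirty percent, which is exactly the falsifier-cum-truthmaker tension posed above.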
I am, once more, a very diffident scientific realist.
Lo! Strange signs in yon ever-turning sky…
1 comment(s)
"Unser Handeln sei getragen von dem stets lebendigen Bewußtsein, daß die Menschen in ihrem Denken, Fühlen und Tun nicht frei sind, sondern ebenso kausal gebunden wie die Gestirne in ihren Bewegungen."
–– from Einstein sagt, ed. Alice Calaprice (Munich/Zurich: Piper, 1997), p. 177.
["May our conduct be borne by the ever-living awareness that human beings are not free in their thinking, feeling, and doing, but are as causally bound as the stars in their motions."]
Perhaps you've heard of the recent "identity crisis" sweeping the astrological world. As Andrea Reiher reports (13 Jan 2011):
Astronomer Parke Kunkle tells NBC news that due to the Earth's changing alignment in the last 3000 years, the sign you are born into now are different than they were long ago. Plus, astronomers believe there is a 13th Zodiac sign called Ophiuchus, which falls between Scorpio and Sagittarius.
The constellation of Ophiuchus is located near the celestial equator and is typically depicted as a man wrangling a serpent. "Ophiuchus" means "serpent-bearer" in Greek. …
… Ophiuchus can be found in the Sidereal Zodiac, which is used by Jyotish (or Hindu) astrologers. The Sidereal Zodiac's astrological sign dates are … based on a moving Zodiac, not the fixed one we use today in Western astrology. Therefore, that Zodiac has shifted almost one full sign from the fixed zodiac.
The astrologically inclined commenters are generally of one mind: "No way, I'm SOME ZODIAC SIGN through and through." Indeed, in the original article that Reiher cites (, 12 Jan 2011), we read:
"So I'm an Aries now, fabulous," said Jozsef Szathmary, reacting to the news.
Szathmary has gone from Taurus to Aries in stride, vowing to his sun alignment that he is ready to change.
"I'm a ball of sunshine....wherever the sun is at, that's where I'm at," he said.
And that is just the point. The Sun and Earth are moving surely and slowly, so the stars of your sign aren't the same as they were when that sign was assigned to your birth thousands of years ago.
Is this not an incredible instance of falsification in what otherwise claims to be a science, and an equally stunning case of cognitive dissonance and dogmatism by astrological adherents? Actually, I think astrology's scientific merit should be demonstrable or refutable on grounds prior to this "discovery". It is a philosophical puzzle which has been tickling my mind for days, since the question for Western astrologers now is whether a person's "fate" was decided by their birth star by the immutable (fated!) arrangement of the stars, or whether a person's fated "character" has actually changed.
Let us see what astrology's resources might be for salvaging itself. On the one hand, astrologers might say a person born a Cancer (like myself) still is a Cancer, and his fated character just included from all time the ordeal of having to go by a different astrological moniker. For instance, if I believed in astrology––which I don't––, I could view myself as a Cancer fated to live the rest of my life under the guidance of Gemini. In other words, I would heed a Gemini's guidance in a Cancer-way, much like an immigrant might live "the American way" in an unmistakably Greek way. His Greekness includes in it the capacity for living as an American. In this way, there is simply a new disclosure of the previously unapprehended dimensions of each Zodiac sign. Astrologists, therefore, might salvage their practice by saying the signs functionally overlap in certain ways. They might say that each sign has relatively disparate degrees of a common set of characteristics or tendencies shared by all the signs.
This tactic, however, seems undesirable, not only because it seems very ad hoc, but also because it seems to explode the notion of fate, which is central to astrology. Let us imagine I was born on 9 July 1979 and it was within the means of astrology to foretell that my fate would be to die as a mighty general in a foreign country. Had I been born two weeks earlier, however, as a Gemini, it would have been my astrological fate to die as a reviled political dissident in my own country. If my fate really is the former, then there can be no changing it––that's what fate means! If, though, I really do get 'demoted' to the fate of a Gemini, then I have no reason to heed either of my fates, since clearly neither of them––indeed, no fate at all––is unalterable.
An alternative tactic would be something like Mr Szathmary's, cited above: just accept your new "fate", based as it is on the ancient patterns of the heavens, and admit that our humble astrology merely tries to "model" the elusive reality. The shifting phenomena of the astrological charts would not disprove the truth of astrology, only loosen the pragmatic fit between the actual deterministic course of the heavens and astrology's usefulness as a tool for mapping that reality. Interestingly, in so far as normal science these days has generally settled for generating appealing "models" of the world (cf. e.g. Hawking and Mlodinow in The Grand Design), which never actually capture the "truth" of the "real world", then astrology has a legitimate seat at the table of scientific modeling. Its comparative inaccuracies are a function of the vast complexity of its object (viz. the entire cosmos and human behavior), not of its falsity per se. In contrast, the superior accuracy of other "hard sciences" is merely due to their comparatively narrower field of inquiry. Meteorology is actually more complex than astrophysics, and neuroscience probably more complex than both of them, but this does not mean we scorn weather science just because it is less pragmatically successful than astrophysics, nor that we reject brain science just because it is arguably still beset by semi-scientific illusions.
We can see hints of both tactics in another article by Reiher (, 13 Jan 2011), one intended to calm the troubled hearts of the astrologically inclined:
Astrological signs are based off the position of the sun relative to the Zodiac constellations on the day you are born. The problem is that the positions were determined thousands of years ago and they have since changed due to the precession, or Earth's "wobble." It would mean horoscope signs as determined by the constellation positions are now nearly a month off. …
The "new" dates are not news -- to astronomers, which are not the same as astrologers. Astronomers include the 13th Zodiac sign, Ophiuchus, which some theorize was discarded by the ancient Babylonians because they wanted 12 signs and not 13. …
Which is just an example of how fickle astrology is.
What are the characteristics of an Ophiuchus? Nobody knows, because astronomers don't assign characteristics based on where the sun was when you were born. And most Western astrologers don't count Ophiuchus. Certainly astrologers could choose to include Ophiuchus and would then have to assign its bearers characteristics and give it an element -- we would guess a Water sign, as Scorpio is one and they are assigned in triangles across the sky….
There has also been talk that this "new" change only affects those born after 2009. That's not right either. Ophiuchus was discovered just as long ago as the other Zodiac constellations and the Earth's shift on its axis has been happening and will continue to happen forever. In another 3000 years, signs will have shifted again and, for instance with ourselves, all Libras will be Leos.
But astrology doesn't work the same way as astronomy and if Western astrology is something you believe in, you're fine. Nothing has changed. So don't panic, horoscope fans. You can stay just the way you are. But it's an interesting concept to ponder.
I admit I don't follow how Reiher, regardless how much she may ponder, can admit both that the astrological signs are constantly shifting and that nothing has changed for astrology. I think her point is that "normal astrology" is not so much about your "fate" from birth as it is about following the relation between the sun and the Zodiac constellations on a regular basis. But how could you know what relation to track unless you already know your "birth sign" and unless it didn't change? Perhaps normal astrology is just about noting how the sun looks and predicting your mood for the day, or noting how the constellations look and predicting which wine you will have after dinner. In that case, the Weather Channel is my astrologer!
How fickle astrology is, indeed.
In any event, the foregoing sketches another reason why I reject determinism, actually: determinism is just promissory astrology. For, according to both astrology and determinism, my origin, character, and destiny are all unchangeably fixed by the primal conditions of everything in the cosmos. For both (hyper-)astrologers and determinists, we are literally just puppets of the motion of the sidereal universe. The problem, though, is that only astrology tries to make meaningful, coherent, specific prophecies from its deterministic presuppositions. "Ordinary determinists", by contrast, admit the world is too complex (so far?) for us to predict someone's fate by correlating it with heavenly phenomena. Clearly, though, the predictions of astrology are wrong, not only as a matter of experience but also in terms of the mutability of the astrological signs. Astrology could be true only if determinism were true. The more specific astrological predictions become, however, the weaker their fatalistic impact becomes, since, as we see, the signs will shift every n years and a person will be indeterminately subject to contradictory fates (fates, mind you, which aren't even "fated for all time").
Now, suppose astrologists, wanting to preserve determinism, lowered their standards for what an astrological prediction is. Then determinism would suffer the same fate as astrology at the hands of its critics. For by making a person's "determined" course of existence so vague that it can include numerous contradictory predictions, deterministic "foresight" becomes either vacuous––like the cleverly generic "horoscopes" which could be applied to almost anyone at various points in life––or an admittedly unfalsifiable doctrine. For, due to the complexity of the world, no prediction can be made in principle which is specific enough to demonstrate or refute determinism. Only if determinists dared to make predictions along the lines of plain ol' horoscopes would we have any means of testing the predictive usefulness of determinism. Without specifying what the causal mechanism is which grounds the deterministic links between "the entire universe" and "my personal fate", determinism is theoretically vacuous––mere promissory astrology. "We can't articulate what your fate is, but we are certain your destiny is fated: we're sure of determinism, but we can't make precise predictions from it: the world's just too complex." For if science were ever so complete that we could predict a person's behavior, inclinations, choices, and destiny from a reading of the stars at the time of her birth––which is to say, from looking at the largest causal matrix which has bearing on that person's actual existence––, then astrology would be the highest form of science. Here's a troubling syllogism, though:
1. If determinism is true, then astrology is true.
2. Astrology is false.
3. Therefore, determinism is false.
Consider another syllogism:
1. If the theory of special relativity is true, then the speed of light in a vacuum is constant.
2. The speed of light is not constant. (Assuming a future experiment shows this.)
3. Therefore, the special theory of relativity is false.
Yet another syllogism:
1. If Christianity is true, then the doctrine of the hypostatic union is true.
2. The doctrine of the hypostatic union is false. (Assuming a future argument demonstrates this.)
3. Therefore, Christianity is false.
In each case, the falsity of the entailments of a theory entails the falsity of the theory. Therefore, only if a determinist is prepared to defend the plausibility of astrology, should he be prepared to embrace all the entailments of determinism.
The usual rejoinder is that our failure to compute exact predictions is merely a limitation of our cognitive abilities, not a disproof of determinism… which sounds an awful lot like what an astrologist says. Here's a fundamental problem, though: if the basis for believing in determinism is the cognitively accessible "scientific evidence" for it, then the basis for determinism is a function of our cognitive abilities as scientific cognizers. In other words, if determinism is "scientifically demonstrable", then the cognitive access to scientific demonstrations of determinism is coterminous with any cognitive basis for determinism. Unfortunately, though, as soon as the determinist admits that exact science may never be able to compute completely rich predictions to satisfy the skeptic, he eo ipso undermines the reliability of our heretofore scientific proof for determinism. The consistent accuracy of scientific predictions in various domains is a function of our computational success: scientific predictions, in other words, are only worth the computations that generate them. As such, the scientific evidence for determinism extends no farther than the success of predictions which bear it out. By admitting that we may in principle be unable to make predictions beyond a certain computational point, however, the determinist is admitting that determinism in principle may not extend beyond the computations we actually make. That is a "local" fact about us, though not a truth about the world as a whole––at least, not a "scientific truth."
The solution at this point would be to include a substantial ceteris paribus clause that, say, "the laws of nature which underwrite our scientific predictions apply in all cases at all times in the universe and will never change." To add such a clause, however, not only preempts the much vaunted revisability and falsifiability of exact science, but also, far from demonstrating it, merely asserts determinism: "The world is the way it is from all time and for all time." If we believe in determinism because exact science meshes with it, yet also admit that the complexity of the world outstrips our ability to get at its true nature, then the "scientific validity" of determinism may just be as much the fault of our idiosyncratic cognitive limits as the falsity of astrology is the fault of the charts' phenomenological limitations.
Songs I use to clean my room…
0 comment(s)
As in, which I use to give me that allmächtigen Zimmerreinmachendengeist [that "almighty room-cleaning spirit"]!
Basically… anything instrumental, and nearly anything at all, by Fugazi!
Friday, January 14, 2011
Philosophy by metabolism again…
2 comment(s)
From "Darwin's Rape Whistle" by Jesse Bering (, 13 Jan. 2011):
I want to rework this paragraph to see what might fall out:
Investigators studying honesty through an evolutionary lens take great pains to point out that "adaptive" does not mean "vicious," but rather only mechanistically viable. Yet dilettante followers may still be inclined to detect a naivete in these investigations that simply is not there. As University of Burpelson psychologist Manfried Rawhide and his colleagues write in a 2079 piece for the Review of Major Pneumatology, "No sensible person would argue that a scientist researching the causes of cancer is thereby justifying or promoting cancer. Yet some people argue that investigating honesty from an evolutionary perspective condemns or undermines honesty."
The second paragraph exemplifies a rebuttal of Bulverism. Bulverism is the tactic of assuming some persons are wrong based on physiological and psychological––or, in this case, evolutionary––factors which dictate their rational biases. We may "believe in" honesty as a fundamental "moral" principle, the Bulverist argues, but this is only because we have been shaped by our evolutionary past to be so biased. Therefore, the preference for honesty, under the aegis of "morality", is just atavistic naivete, which ought to be supplanted by a truly rational ethics that is cognizant of the autonomy we now have over our own natural selection. Bering casts his vote against the Bulverists thus:
Bering's point is that the mere fact that the rape instinct is strong in numerous males does not mean rape is therefore morally acceptable. The implication of his article, however, points in an obverse direction, namely, that because rape is bad, though natural selection has kept it going, the equally naturally selected measures of the female body against rape are a kind of good. It is interesting to note how Darwinian ethics is essentially Kantian in so far as the former rejects behavior which, if applied on a species-wide level, would lead to the degradation and dissolution of prior reproductive success. I will call this Darwikantian ethics. Kant, under the rubric of the "categorical imperative", argued that we should do only that which we believe could be practiced by everyone at all times, and abstain from that which we realize could not be practiced by all people at all times. As he writes in Grounding for the Metaphysics of Morals (tr. James W. Ellington, 3rd ed., Hackett, [1785] 1993, p. 30): "Act only according to that maxim whereby you can, at the same time, will that it should become a universal law." Lying, for instance, is unacceptable because, if everyone did it––i.e. if it became literally universally acceptable––, our entire means of communication and cooperation would collapse. Likewise, Darwinian ethics rejects selfishness on the grounds that widespread selfishness––i.e. as a 'universal' feature of human behavior––would undermine ultimate reproductive stability. And "it would be bad," or at least as "bad" as a Darwinian is allowed to say something is in 'purely' adaptive terms.
Aye, there's the rub. If the basis for defending, say, altruism is that altruism has generally promoted reproductive success in the past, then we can take it as a general ethical principle that that which is morally defensible is morally defensible because it promotes reproductive success. On this principle, however, what basis do we have for condemning rape in every case? Presumably, again, the saving principle is the Darwikantian categorical imperative (DCI), but this is a feeble moral guide for at least two reasons. First, how would we define the rapist's principle for action? Does he believe it would be a universal law that every man should rape every woman under any circumstances? Certainly not, since he would certainly defend his mother and sister and other favored females against male aggressors. His principle may, therefore, be so nuanced that it could be a universal basis for action, say, "Rape a woman only when the coast is clear, you have already sired at least another child, she does not appear to be pregnant, etc." If the conditions for the action were so specific that, even if universally accepted, they would come together only rarely, and therefore would not undermine the collective reproductive success of the species, it's hard to see how the DCI could coherently reject it. Further, if the rapist used a prophylactic so that pregnancy and its burdens on the woman were not an issue, he'd seem to be that much less immoral. But surely such moral reasoning is amiss.
A second problem with the DCI is that it cuts both ways. For, if an action cannot be "morally" endorsed unless it could be applied universally for the species, then altruism seems to be morally unacceptable. No species could survive if all its members all the time acted altruistically, since, if they literally never acted for their own interests, they would become paralyzed by inaction, like Buridan's ass, and probably starve to death. More realistically, if it were only the case that nearly everyone always acted altruistically (as, in fact, we are expected to do!), the altruists would eventually be overtaken by the minority of "deviants" acting selfishly. The point is that if the DCI proscribes actions that would have universally negative results, then altruism is morally proscribed by the DCI. As soon as the proponent of the DCI admits there must be some 'intermediate' principle between sheer relativism and DCI-absolutism, however, she is back in the folds of traditional moral argumentation and Darwikantian ethics offers little, if any, light in the discussion.
The institution of marriage, for instance, is seen as a good in Darwikantianism because it enhances social stability and thereby promotes reproductive success. This does not, however, mean everyone can or must get married, which shows once more that there is some other domain of moral wisdom by which otherwise "natural" behaviors are deemed justifiable and not merely "mechanistically viable." If marriage is wrong in some cases, presumably because a DCI-style universalization of such cases would undermine reproductive success, then it's hard to see why rape would not be right in some cases (say, as a form of cathartic vengeance which restores the social order by taking one male down a peg by the symbolic attack of his daughter or wife). That kind of socially beneficial "ritual rape" could be applied universally, since it would only apply in certain circumstances. But again, surely such moral reasoning is flawed.
The notion of a universally applicable specific law is not incoherent; indeed, it is highly common in science. The Bode-Titius Law, for instance, is universally valid if taken in conjunction with limiting conditions (e.g. the absence of Neptune). Indeed, the whole of Newtonian physics is still scientifically, universally "true", even though it is theoretically false, when qualified thus and such. Likewise, quantum mechanics is technically deterministic according to the universal validity of the Schrödinger equations, though it is universally indeterminate in every specific case. Paradoxical, perhaps, but true. So, while rape––and altruism––would be universally unacceptable, specific cases of rape, and specific cases of altruism, would be acceptable in Darwikantianism as long as they are qualified in their particular applications.
Yet, we all know that rape is intrinsically wrong, not merely generally undesirable. How do we know this, though? Not by a vague nod to natural selection, but rather by an awareness of the intrinsic principles of right human conduct. There seems to be an important difference between universally and absolutely true (i.e. between always potentially and intrinsically valid). I will not explore that difference now, mainly because I still must ponder it, but I want to close with a syllogism that captures the point of this post.
1. Humans are intrinsically moral agents.
2. Moral action is not intrinsically derived from natural selection.
3. Therefore, the nature of humans is neither intrinsically nor exhaustively based on natural selection.
Because we can decide to be better than our instincts, we are better than the basis for our instincts.
Thursday, January 13, 2011
Brethren, fall not back into thyselves…
0 comment(s)
Hebrews 3:
[14] For we share in Christ, if only we hold our first confidence firm to the end,
12 videte fratres ne forte sit in aliquo vestrum cor malum incredulitatis discedendi a Deo vivo
13 sed adhortamini vosmet ipsos per singulos dies donec hodie cognominatur ut non obduretur quis ex vobis fallacia peccati
14 participes enim Christi effecti sumus si tamen initium substantiae usque ad finem firmum retineamus
12 Gebt Acht, Brüder, dass keiner von euch ein böses, ungläubiges Herz hat, dass keiner vom lebendigen Gott abfällt,
13 sondern ermahnt einander jeden Tag, solange es noch heißt: Heute, damit niemand von euch durch den Betrug der Sünde verhärtet wird;
14 denn an Christus haben wir nur Anteil, wenn wir bis zum Ende an der Zuversicht festhalten, die wir am Anfang hatten.
12 弟兄們!你們要小心,免得你們中有人起背信的惡心,背離生活的天主;
13 反之,只要還有“今天”在,你們要天天互相勸勉,免得你們有人因罪惡的誘惑而硬了心,
14 因為我們已成了有分於基督的人,只要我們保存着起初懷有的信心,
[12 Take care, brothers, lest there be in any of you an evil, unbelieving heart, leading you to fall away from the living God; 13 but exhort one another every day, as long as it is called "today," that none of you may be hardened by the deceitfulness of sin; 14 for we share in Christ, if only we hold our first confidence firm to the end.]
We are called to faith in the God whose splendour blinds the eye of man as the sun blinds the eye of an owl. With that faith in the unseen reality behind all passing realities that crowd our vision, we enter the family of the Church. Washed in the waters of Baptism, Christ's own Death and Resurrection, we are given a new challenge: seeing our true selves in Christ beneath the frail scaffolding of our fallen nature. We are to treat ourselves as clay vessels in which treasure is hidden and likewise to treat the Sacraments as vessels of various textures in which the same treasure is hidden in a boundless way. Only by holding to the substance of the Eucharist, hidden beneath the accidents, can we rightly partake of the Holy Gifts. Likewise, only by holding to the substantial God-likeness of all humans, beneath the accidents of their birth and idolatrous confusions, can we live right among men. In the same vein, only by holding to the substance of a unified, coherent nature, beneath the accidents of empirical error and statistical uncertainty, can we attain a scientific grasp of the world. Thus, all reality is sacramental, all reality a symphony of dim reflections towards Eucharistic fullness.
Saved from God or by God?
0 comment(s)
From A Thinking Reed:
1 comment(s)
I just guffawed for about three minutes non-stop reading the reviews of this $6,400 Kindle book at Amazon. Brace yourself.
HT to Brandon.
0 comment(s)
Summa contra gentiles Sancti Thomae Aquinatis - Glosses from Jazzland: SCG, Book I, Chapter 33: "Chapter 33: THAT NOT ALL NAMES ARE SAID OF GOD AND CREATURES IN A PURELY EQUIVOCAL WAY [Caput Triginta Tres: Quod non omnia nomina dicuntur ..."
I laugh, therefore…
0 comment(s)
Brandon writes:
This is a repost from 2006. The second one, of course, is the most famous Descartes joke ever.
Bach's music is considered a proof for the existence of God…
0 comment(s)
The accordion is to European instruments what German is to European languages. Only the few realize how beautiful both can be, despite plebeian biases to the contrary.
Cf. this post for a similar cogitation on indeterminism and jazz.
Tuesday, January 11, 2011
Tell yourself now what you will tell yourself next…
7 comment(s)
Let us imagine a man, Larry, is arguing for strict (albeit compatibilist) determinism. His opponent, Mary, points out how this makes it a scientific necessity, in the only world we actually know, not only that humankind has arisen, but also that each of us in this discussion has arisen. As such, on a determinist reading, given the initial conditions of the world as disclosed by science, each of us was personally predestined and the world as we know it was foreordained from eternity. "So much," Mary asks, "for the separation of science and theology, right?"
I don't accept strict determinism, but I also don't think the strong anthropic principle (SAP) cuts much ice here. The SAP is often deployed to deflate the shock people feel that we exist in an otherwise lifeless, hostile cosmos. We only notice the delicate balance of features which makes the cosmos suitable for the emergence of sentient beings like ourselves because the cosmos has that balance! If it lacked that balance, we'd not be here to marvel. As I say, though, I don't think the SAP cuts much ice against Mary's observation, since the only empirically grounded basis for initial conditions we have is the world we in fact inhabit. As such, determinism, coupled with our existence, means the world is necessarily personally sentient, and that sounds a lot like ID, or something more. The tortuous spread of time it took for us to 'get here' is just an anthropocentric illusion: the universe was 'getting to' our level (and perhaps beyond) from the very beginning.
Indeed, on strict determinism, the present is but a holographic function of the very first instant, and thus the first instant contained within itself, as in the mind of God, all subsequent potentialities of its own immanent necessity. Only if we have some empirical basis for saying there could have been a "different world altogether" (i.e. absolutely different initial conditions) could we say we might not have existed in it. But then, if the actual world's initial conditions could have been different, then the actual world (i.e. its most basic set of conditions for being) is radically contingent. What, then, of strict and total determinism? Is not the multiverse a mad grab for contingency by otherwise narrow-mindedly deterministic folks? Look at David Lewis' modal hyperactualism: physicalist Platonism.
Even if Larry grants there is epistemic indeterminacy, due to our ignorance of an otherwise complete determinism, he has no coherent basis for distinguishing between our epistemic uncertainty (U:e) and the ontological necessity of the world (N:w). "When you don't know every input," Larry concedes, "you can represent the range of results probabilistically." Yet, he maintains, "that is stochasticism via ignorance, as opposed to inherent stochasticism." The problem is that, since N:w causes U:e without any metaphysical 'slack' (on determinism), if the latter is genuinely stochastic, and yet is genuinely continuous with the underlying physical world, then the physical world itself generates genuine stochasticity (i.e. in us, if nowhere else).
Further, we should make Larry familiar with D. M. MacKay's arguments about the inherent unpredictability of self-knowledge. Briefly stated, MacKay argues that even if we at time t knew every possible 'input' about ourselves at time t+1, we could not predict our action at time t+n, since at t+1 our knowledge that we will certainly do A or not-A would recursively influence our total epistemic state at t+1 and force us to recalculate what we-at-t+1-with-A-certainty versus we-at-t+1-with-not-A-certainty would do. Interestingly, no one, not even an omniscient calculator (OC) who knew as much about us as possible, would be able to assert a prediction of our action (OC[A]), since, first, OC[A] would be a certainty only if OC told us our action and we in fact bore its truth out (but then the recursive self-prediction problems arise), and, second, the truthmaker of OC[A] (even if OC whispered it in secret) would obtain only when we in fact did A. Since there is always a logical possibility I will do not-A, OC's prediction depends on my doing A, not vice versa. Indeed, OC[A] is not a scientific prediction if it is not in principle falsifiable. As such, once again, we see how science is inherently non-deterministic.
Now, to be more precise, the distinction between my two points about OC's prediction is this: The first point means that OC[A] would support determinism iff OC's assertion of OC[A] to me could not even logically influence me to do not-A. However, once I know OC[A], I know that I know OC[A] and my knowledge of OC[A] (k:OC[A]) becomes a new factor which OC must factor into his prediction of my action. So in order for OC to prove to me that I am subject to complete determinism, he must announce his prediction so I may witness how I invariably comply with it. Once he tells me my future, however, he is logically one step behind the epistemic state k:OC[A] to which OC[A] applies. OC must then recalculate OC[*] to include what OC[A] did not, namely k:OC[A]. The upshot is that no one can ever prove to me that I am a deterministic system.
Second, if determinism is true and could be mapped onto a complete knowledge of the world, there would be no logical space for "prediction". Prediction is an assertion about a state of affairs (SOA) which will arise with a certain probability. A necessary effect cannot be any more predicted than one can "predict" the sum of 2 and 2 or that with which X is identical. The combination of determinism and Laplacian omniscience removes the secondary causal efficacy of anything O being examined, since "what O does" at time t is nothing more than what "the world prior to t" is. On Laplacian determinism, distinct entities are just illusory epiphenomena of the encompassing total causal SOA. As such, there is nothing to predict "beyond" SOA at t (SOA(t)), only descriptions to be made of SOA(t), since a complete account of SOA(t) will be symmetrically identical with and logically inclusive of SOA at any time (SOA(*)). If that were true, though, there would be no "me" about whom to make predictions. Only if I contribute something ontologically distinct to the SOA can predictions be made about my effects. In which case, however, no prediction is justified by a reference to SOA prior to my effects, but rather all such predictions hinge on the actuality of my effects. The truthmaker, therefore, for OC[A] is not SOA at time t(OC[A]), since an assertion about that SOA would predate (viz. not include) the occurrence of A. Therefore, any putative OC[A] would have SOA:t(OC[A])-1, not SOA:t(OC[A]), as its truthmaker.
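The first of these two points can even be given a toy mechanical form. The following sketch is my own illustration, not MacKay's: once OC's announcement becomes an input to the agent, an agent free to condition on the announcement falsifies every announced prediction.

```python
# A toy mechanization of the announcement problem (my illustration,
# not MacKay's): once OC's announcement becomes part of the agent's
# epistemic state k:OC[A], an agent free to condition on it falsifies
# every announced prediction OC[A].

def agent(announced_prediction):
    """Hear the announcement, update on it, and do the opposite."""
    return "not-A" if announced_prediction == "A" else "A"

for announcement in ("A", "not-A"):
    actual = agent(announcement)
    verdict = "holds" if actual == announcement else "fails"
    print(f"OC announces {announcement!r}; the agent does {actual!r}; "
          f"the prediction {verdict}.")
```

Of course the agent here is trivially stipulated, but that is the point: nothing in the announcement's content can close the logical gap between OC[A] and k:OC[A].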
I have written about this problem before in a post titled "Reporting live". The gist of that post was this:
Determinism entails that there is such a "report" on all things at every instant, since at every instant the world necessarily is the way it is without an indeterminate remainder. … However, if the way the world is necessarily entails a report on all things, then the way the world is at any instant would have to include a report about the way of the world just subsequent to the report's existence. … We can easily spin this inherent indeterminacy to infinity, but in that case we have a W which is indeterminately true not in just two ways, but in an infinitude of possible states of affairs. Consequence for determinism? A state of affairs comprised of an infinite number of possible states of affairs is indeterminate in potentially infinite ways. So determinism is false in the actual world. It can't be a determinately true R in W that R(W(x,y...n)), since R(W(x,y...n)) would have to include itself as a determinate truth in W(x,y...n), whereupon W(x,y...n) is no longer determinately and singularly W(x,y...n), but is W(R(W(x,y...n))).
On top of all this, there is the matter of the inherent physical underdetermination of theoretical explanation. Briefly, since any physical SOA can be subsumed to innumerable competing formal explanations, no physical SOA perfectly and exclusively exemplifies a single, determinate formal operation. On the other hand, we know we perform determinate formal operations. Ergo, we know we are not merely physical SOAs. The physical is formally indeterminate, but formal truth is not. As such, formal truth is not purely physical and there is no basis for total physical determinism. I have, of course, written about these determinate vs. indeterminate lines of reasoning at length before. I will add all this into the book I am writing about determinism and free will, Deo volente.
Monday, January 10, 2011
Christmas is for Christ! 2.0
0 comment(s)
Continuing with points I made in an earlier post:
Christmas Was Never a Pagan Holiday
Marian T. Horvat, Ph.D.
Second, this claim is based on unsound assumptions. … Emperor Aurelian inaugurated the festival of the Birth of the Unconquered Sun trying to give new life – a rebirth – to a dying Roman Empire. It is much more likely … that the Emperor’s action was a response to the growing popularity and strength of the Catholic religion, which was celebrating Christ’s birth on December 25, rather than the other way around. (3) …
Posted on December 15, 2010
Of related interest is the recent disclosure of (colorized) photographs of a "Nazi Christmas". The story is interesting in its own right as historical candy. It is, however, especially pertinent for this post because it provides a vivid, recent instance of the same kind of pagan revisionism of which Dr. Horvat writes in reference to Emperor Aurelian's attempts to supplant Christmas for caesaropagan gains.
Hitler's Christmas party: Rare photographs capture leading Nazis celebrating in 1941
By Allan Hall, Daily Mail Online, 24 Dec. 2010
…the Nazi Christmas was far from traditional.
Out of sight at the top of the tree behind Hitler was a swastika instead of an angel, and many of the baubles carried runic symbols and iron cross motifs. The remarkable pictures were captured by Hugo Jaeger, one of the F[ü]hrer’s personal photographers.
He buried the images in glass jars on the outskirts of Munich towards the end of the war, fearing that they would be taken away from him.
Later he sold them to Life Magazine in America which published many of them this week. …
In 1944-1945, the Nazis tried to reinvent Christmas once again as a day to commemorate the dead, in particular fallen soldiers – by that time Germany had lost almost four million men in the war.
While there is no denying Hitler made use of Christian imagery and rhetoric [LINK]––a Germanic tradition of "militarizing" the Gospels which goes back to the early Middle Ages, as I discussed in my bachelor's honor's thesis (with much insight from James C. Russell's scintillating book)––, the point is that the dominant basis for his political vision was Aryan occult theosophy [LINK1, LINK2, LINK3, LINK4]. Indeed, there is a great deal of evidence that Hitler had much Nietzschean disdain for Christianity [LINK1, LINK2]. Hence, to say that "Hitler was a [good] Catholic" is a canard that should be put to rest. Claiming that he was an Aryan pagan may also be debatable (just as claiming Nietzsche was a proto-Nazi is irresponsible), but the point is that Hitler was hardly a consistent, honest Catholic.
A more nuanced and troubling point of debate is how the Church's teaching and witness could defend and extricate itself from the aid which strains of anti-Semitism in Christian history gave to Nazi anti-Semitism. In the same way, the question for Nietzscheans is not whether Nietzscheanism was proto-Nazism, but whether it has resources of its own to inoculate itself from legitimately being expressed and embodied by an Übermensch in the mold of Hitler. The consistent and radical opposition which the Church gave the Nazi party demonstrates that the Gospel is genuinely distinct from the anti-Semitic errors of past teachers. I do not, however, think Nietzscheanism can so easily be removed from the Hitler-ethos.
I am reading a book on quantum field theory, though I have never been trained as a physicist. I have found a big gap in language and have trouble understanding what physicists mean by "quantum field".
If I understand correctly, after quantizing twice (field and operator), a quantum field should be an operator-valued distribution. Am I right? I would appreciate it if some mathematician who is familiar with physics could kindly explain "quantum field" in terms of mathematics.
Thank you in advance.
1 Answer
I am not sure what your math level is, so I'll try to make it as simple as possible. In standard quantum mechanics, we formalize the state of a physical system as a vector in a Hilbert space. A Hilbert space is, roughly, a multidimensional space whose dimensions need not be finite or discrete (as in 3D space) but can form a continuum. You can intuitively think of any real-valued function f as a vector in Hilbert space, where the value f(x) is the coordinate of the vector f along dimension x. The Schrödinger equation describes the evolution of the "quantum state" (a vector in Hilbert space). In second quantization you go a further step of abstraction. Now the "quantum field" is a mathematical object that belongs to Fock space. If it means something to you: technically, the Fock space is (the Hilbert space completion of) the direct sum of the symmetric or antisymmetric tensors in the tensor powers of a single-particle Hilbert space H. The state of the quantum field allows you to calculate the distribution of values for repeated measurements on the system (using additional rules).
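For reference, here is the standard formula behind that verbal description (textbook material, not specific to this answer). Writing $S_\nu$ for the symmetrization ($\nu=+$, bosons) or antisymmetrization ($\nu=-$, fermions) projector, the Fock space over a single-particle Hilbert space $H$ is
$$ F_\nu(H) ~=~ \overline{\bigoplus_{n=0}^{\infty} S_\nu H^{\otimes n}} ~=~ \mathbb{C} ~\oplus~ H ~\oplus~ S_\nu\!\left(H \otimes H\right) ~\oplus~ \cdots, $$
where the overline denotes Hilbert space completion and the $n$-th summand is the $n$-particle sector; a state of the quantum field is a vector in this space.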
If we have a one dimensional system where the potential
$$V~=~\begin{cases}\infty & |x|\geq d, \\ a\delta(x) &|x|<d, \end{cases}$$
where $a,d >0$ are positive constants, what then is the corresponding classical case -- the approximate classical case when the quantum number is large/energy is high?
What is $V$ when $x \in (-d,0) \cup (0,d)$? – C.R. Apr 27 '12 at 9:09
Did you mean "$\infty$ when $|x| > d$"? Also did you mean "$a$ when $x = 0$" i.e. $a\delta(x)$. Finally is $a$ of the order of classical energies or much less? If the latter, the system just looks like a square well with no barrier at classical energies. – John Rennie Apr 27 '12 at 9:41
Dear @Sys, it's a virtue and necessity, not a bug, that the delta-function is infinite at $x=0$. If it were finite at a single point (i.e. interval of length zero), like in your example, it would have no impact on the particle because zero times finite is zero. So your potential as you wrote it is physically identical to $V=\infty$ for $|x|<d$ and $0$ otherwise which is just a well with the standing wave energy eigenstates. The finite modification of $V$ at one point, by $a$, plays no role at all. A potential with $a\delta(x)$ in it would be another problem. – Luboš Motl Apr 27 '12 at 10:44
@LubošMotl: Thanks, actually the delta function version instead of V=a at x=0 is the right one. What is the classical limit of that? – Sys Apr 27 '12 at 11:15
@JohnRennie: I think your comment suggestion was right, that there is a delta function at x=0. – Sys Apr 27 '12 at 11:17
2 Answers
Here we derive the bound state spectrum from scratch. Not surprisingly, the conclusion is that the Dirac delta potential doesn't matter in the semi-classical continuum limit, in accordance with Spot's answer.
The time-independent Schrödinger equation reads for positive $E>0$,
$$ -\frac{\hbar^2}{2m}\psi^{\prime\prime}(x) ~=~ (E-V(x))\psi(x), \qquad V(x)~:=~V_0\delta(x)+\infty \theta(|x|-d), \qquad V_0~>~0, $$
with the convention that $0\cdot \infty=0$. Define
$$v(x) ~:=~ \frac{2mV(x)}{\hbar^2}, \qquad e~:=~\frac{2mE}{\hbar^2}~>~0, \qquad k~:=~\sqrt{e}~>~0, \qquad v_0 ~:=~ \frac{2mV_0}{\hbar^2}, $$
so that the Schrödinger equation becomes
$$ \psi^{\prime\prime}(x) ~=~ (v(x)-e)\psi(x). $$
We know that the wave function $\psi$ is continuous with boundary conditions
$$\psi(x)~=0 \qquad {\rm for}\qquad |x|\geq d.$$
Also the derivative $\psi^{\prime}$ is continuous for $0<|x|<d$, and possibly has a kink at $x=0$,
$${\lim}_{\epsilon\to 0^+}[\psi^{\prime}(x)]^{x=\epsilon}_{x=-\epsilon} ~=~v_0\psi(x=0). $$
We get $$\psi_{\pm}(x)~=~A_{\pm}\sin(k(x\mp d))\qquad {\rm for } \qquad 0 \leq \pm x \leq d.$$
1. $\underline{\text{Case} ~\psi(x=0)=0}$. Then $$n~:=~\frac{kd}{\pi}~\in~ \mathbb{N}.$$ We get an odd wave function $$\psi_n(x)~\propto~\sin(kx).$$ In particular, the odd wave functions do not feel the presence of the Dirac delta potential.
2. $\underline{\text{Case} ~\psi(x=0)\neq 0}$. Then continuity at $x=0$ implies that the wave function is even $A_{+}+A_{-}=0$. Phrased equivalently, $$\psi(x)~=~A\sin(k(|x|-d)).$$ The kink condition at $x=0$ becomes $$ v_0A\sin(-kd)~=~2kA \cos(kd), $$ or equivalently, $$ v_0\tan(kd)~=~-2k.$$ In the semiclassical continuum limit $$k \gg \frac{1}{d}, \qquad k \gg v_0,$$ this becomes $$\frac{kd}{\pi}+\frac{1}{2}~\in ~\mathbb{Z}, $$ i.e., in the semiclassical continuum limit the even wave functions do not feel the presence of the Dirac delta potential as well.
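As a sanity check on that limit, one can solve the even-state quantization condition $v_0\tan(kd)=-2k$ numerically and watch $kd/\pi$ approach $m+\frac{1}{2}$. The following is a minimal sketch (not part of the original answer; units with $\hbar=2m=1$, and the values of $d$ and $v_0$ are arbitrary samples):

```python
# Numerically checking the even-state quantization condition
#     v0 * tan(k*d) = -2*k
# and the semiclassical claim that its roots approach k*d/pi = m + 1/2.

import math

def f(k, d, v0):
    return v0 * math.tan(k * d) + 2.0 * k

def even_root(m, d, v0, tol=1e-12):
    # Exactly one root lies between consecutive poles of tan(k*d),
    # i.e. for k*d in ((m + 0.5)*pi, (m + 1.5)*pi); bisect on it.
    lo = (m + 0.5) * math.pi / d + 1e-9   # f(lo) < 0 just past the pole
    hi = (m + 1.5) * math.pi / d - 1e-9   # f(hi) > 0 just before the next
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo, d, v0) * f(mid, d, v0) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

d, v0 = 1.0, 5.0  # sample well half-width and delta strength
for m in (0, 1, 5, 20, 100):
    k = even_root(m, d, v0)
    print(f"m = {m:3d}: k*d/pi = {k * d / math.pi:.6f}  (vs m + 1/2 = {m + 0.5})")
```

For small $m$ the root sits well away from $m+\frac{1}{2}$ (the delta matters at low energy), while for large $m$ it converges to $m+\frac{1}{2}$, as claimed.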
Firstly, it's easy to start off with just the Dirac delta potential and see what that does. Wiki has a nice solution for the delta function potential, and I am lifting parts of it here.
Consider a potential $V(x) = a\delta (x)$ and consider a scattering like configuration, where a plane wave $e^{ikx}$ is incident from the left.
$$ \psi(x)=\begin{cases}e^{ikx}+re^{-ikx} & x<0 \\ te^{ikx} & x> 0\end{cases} $$
By matching the boundary conditions, like on the wiki page, you get $$ t = 1+r\\ (1-\alpha)t = 1-r $$
where $$ \alpha = \frac{ 2ma}{ik\hbar^2} $$ characterizes the effect of the delta potential. Solving for $r$ and $t$, $$ t = \frac{1}{1-\alpha/2}\\ r=-\frac{\alpha/2}{1-\alpha/2} $$
Now, it is easy to see that for high incident $k$, the only effect of the Dirac delta potential is to write a phase discontinuity on the wavefunction. This is because, as $k$ increases, the transmission $|t|^2=1/(1+|\alpha|^2/4)$ approaches 1, but the transmitted wavefunction gets an extra phase given by $$ \text{Arg}(t) = -\tan^{-1}(|\alpha|/2) $$
Getting back to the problem at hand, for a particle in a box (without the delta function), the allowed $k$ vectors are given by forcing the wavefunction to be zero at the walls at $x=-d$ and $x=d$, which gives us the condition
$$ k_n=\frac{\pi n}{2d} $$
If now, we add a delta potential, then for high values of $n$ (or $k$), all the delta function will do is introduce a phase discontinuity at the origin, and consequently what you should expect is that the boundary condition is matched not for $k_n$, but something slightly off $k_n+\delta k_n$, where $\delta k_n$ is a small correction due to the delta function potential. For high values of $n$, this correction would drop, as the phase discontinuity decreases, and for classical like states (very large $n$) you expect to recover 1D box states, as mentioned by John Rennie.
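(Again not from the original answer, just the two formulas above in runnable form; units are illustrative.)

```python
import numpy as np

hbar = m = a = 1.0                       # illustrative units
k = np.linspace(0.1, 50, 500)
abs_alpha = 2 * m * a / (k * hbar**2)    # |alpha| for alpha = 2ma/(ik hbar^2)
T = 1 / (1 + abs_alpha**2 / 4)           # transmission |t|^2 -> 1 at large k
phase = -np.arctan(abs_alpha / 2)        # extra phase on the wave -> 0
print(T[-1], phase[-1])
```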
Thank you, Spot! – Sys Apr 27 '12 at 17:59
add comment
|
a59868aa905f6821 | Last month, at the joint AMS/MAA meeting in San Diego, I spoke at the AMS “Current Events” Bulletin on the topic “Why are solitons stable?“. This talk was supposed to be a survey of many of the developments on the rigorous stability theory of solitary waves in dispersive wave models (e.g. the Kortweg-de Vries equation and its generalisations, nonlinear Schrödinger equations, etc.), although my actual talk (which was the usual 50 minutes in length) only managed to cover about half of the material I had planned.
More recently, I completed the article that accompanies the talk, and which will be submitted to the Bulletin of the American Mathematical Society. In this paper I describe the key conflict in these wave models between dispersion (the tendency of waves of differing frequency to move at different speeds, thus causing any localised wave to disperse in space over time) and nonlinearity (which can cause any concentrated portion of the wave to self-amplify). Solitons seem to lie at the exact balancing point between these two forces, neither dispersing nor amplifying, but instead simply traveling at a constant velocity or oscillating in phase at a constant rate. In some cases, this balancing point is unstable; remove even a tiny amount of mass from the soliton and it eventually disperses completely into radiation, or one can add a tiny amount and cause the soliton to concentrate into a point and thence exhibit blowup in finite time. In other cases, the balancing point is stable; small perturbations to a soliton may end up changing the amplitude, position, and/or velocity of the soliton slightly, but the bulk of the solution still closely resembles a soliton in size, shape, and behaviour. Stability is sometimes enforced by linear properties, such as dispersive estimates or spectral properties of the linearised dynamics, but is also often enforced by nonlinear properties, such as nonlinear conservation laws, monotonicity formulae, and local propagation estimates for mass and energy (such as those provided by virial identities). The interplay between all these properties can be remarkably subtle, especially in the critical case when a key conserved quantity is scale-invariant (thus leading to an additional degeneracy in the soliton manifold). This is particularly evident in the remarkable series of papers by Martel and Merle establishing various stability and blowup properties near the ground state soliton of the critical generalised KdV equation, which I spend some time discussing (without going into too many of the (quite numerous) technical details). The focus in my paper is primarily on the non-integrable case, in which the techniques are primarily analytic rather than algebraic or geometric. |
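(An aside of mine, not from the paper: the balancing act can be checked symbolically. For the KdV normalization $u_t + 6uu_x + u_{xxx} = 0$ — other normalizations differ by rescalings — the travelling sech² profile makes the nonlinear and dispersive terms cancel exactly.)

```python
import sympy as sp

x, t, c = sp.symbols('x t c', positive=True)
u = c / 2 * sp.sech(sp.sqrt(c) / 2 * (x - c * t))**2   # one-soliton profile
residual = sp.diff(u, t) + 6 * u * sp.diff(u, x) + sp.diff(u, x, 3)
print(sp.simplify(residual))   # -> 0: nonlinearity exactly offsets dispersion
```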
ac88cff389bda3d5 |
Here are some of the non-mathematical laminae that Wolfram|Alpha knows closed-form equations for:
shape lamina
Assume we want a filled curve instead of just the curve itself. For closed curves, say the James Bond logo, we could just take the curve parametrizations and fill the curves. As a graphics object, filling a curve is easy to realize by using the FilledCurve function.
James Bond curve
007 curve
For the original curves, we had constructed closed-form Fourier series-based parametrizations. While the FilledCurve function yields a visually filled curve, it does not give us a closed-form mathematical formula for the region enclosed by the curves. We could write down contour integrals along the segment boundaries in the spirit of Cauchy’s theorem to differentiate the inside from the outside, but this also does not result in “nice” closed forms. So, for filled curves, we will use another approach, which brings us to the construction of laminae for various shapes.
The method we will use is based on constructive solid geometry. We will build the laminae from simple shaped regions that we first connect with operations such as AND or OR. In a second step, we will convert the logical operations by mathematical functions to obtain formulas of the form f(x, y) > 0 for the region that we want to describe. The method of conversion from the logical formula to an arithmetic function is based on Rvachev’s R-function theory.
Let’s now construct a geometrically simple shape using the just-described method: a Mitsubishi logo-like lamina, here shown as a reminder of how it looks.
As this sign is obviously made from three rhombi, we define a function polygonToInequality that describes the interior of a single convex polygon. A point is an interior point if it lies on the inner side of all the line segments that are the boundaries of the polygon. We test the property of being inside by forming the scalar product of the normals of the line segments with the vector from a line segment’s end point to the given point.
It is simple to write down the vertices of the three rhombi, and so a logical formula for the whole logo.
The last equation can be considerably simplified.
The translation of the logical formula into a single inequality is quite simple: we first write all inequalities with a right-hand side of zero and then translate the Or function to the Max function and the And function to the Min function. This is the central point of the Rvachev R-function theory. By using more complicated translations, we could build right-hand sides of a higher degree of smoothness, but for our purposes Min and Max are sufficient. The points where the right-hand side of the resulting inequality is greater than zero we consider part of the lamina, otherwise the points are outside. In addition to just looking nicer and more compact, the single expression, as compared to the logical formula, evaluates to a real number everywhere. This means, in addition to just a yes/no membership of a point {x,y}, we have actual function values f(x, y) available. This is an advantage, as it allows for plotting f(x, y) over an extended region. It also allows for a more efficient plotting than the logical formula because function values around f(x, y) = 0 can be interpolated.
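(In Python-flavored pseudocode — my own sketch of the idea, not the blog's Mathematica code, and with made-up vertex data — the translation reads:)

```python
import numpy as np

# And -> Min, Or -> Max. Each edge of a counterclockwise convex polygon
# contributes a signed half-plane function, positive on the inner side.
def polygon_to_inequality(vertices):
    v = np.asarray(vertices, dtype=float)
    def f(x, y):
        vals = [(x2 - x1) * (y - y1) - (y2 - y1) * (x - x1)
                for (x1, y1), (x2, y2) in zip(v, np.roll(v, -1, axis=0))]
        return np.minimum.reduce(vals)           # And over the half-planes
    return f

# A union of three convex pieces standing in for the rhombi of the logo:
r1 = polygon_to_inequality([(0, 0), (1, 0.5), (0, 1), (-1, 0.5)])
r2 = polygon_to_inequality([(0, 0), (-1, -0.5), (0, -1), (1, -0.5)])
r3 = polygon_to_inequality([(0, 0), (0.5, 1), (-0.5, 2), (-1, 1)])
logo = lambda x, y: np.maximum.reduce([f(x, y) for f in (r1, r2, r3)])  # Or
```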
So we obtain the following quite simple right-hand side for the inequality that characterizes the Mitsubishi logo.
And the resulting image looks identical to the one from the logical formula.
Plotting the right-hand side for the inequality as a bivariate function in 3D shows how the parts of the inequality that are positive emerge from the overall function values.
Now, this type of construction of a region of the plane through logical formulas of elementary regions can be applied to more regions and to regions of different shapes, not necessarily polygonal ones. In general, if we have n elementary building block regions, we can construct as many compound regions as there are logical functions in n variables. The function BooleanFunction enumerates all these 2^2^n possibilities. The following interactive demonstration allows us to view all 65,536 configurations for the case of four ellipses. We display the logical formula (and some equivalent forms), the 2D regions described by the formulas, the corresponding Rvachev functions, and the 3D plot of the Rvachev R-function. The region selected is colored yellow.
Cutting out a region from not just four circles, but seven, we can obtain the Twitter bird. Here is the Wolfram|Alpha formula for the Twitter bird. (Worth a tweet?)
By drawing the zero-curves of all of the bivariate quadratic polynomials that appear in the Twitter bird inequality as arguments of max and min, the disks of various radii that were used in the construction become obvious. The total bird consists of points from seven different disks. Some more disks are needed to restrict the parts used from these seven disks.
Here are two 3D versions of the Twitter bird as 3D plots. As the left-hand side of the Rvachev R-equation evaluates to a number, we use this number as the value (possibly modified) in the plots.
We can also use the closed-form equation of the Twitter bird to mint a Twitter coin.
The boundary of the laminae described by Rvachev R-functions has the form f(x, y) = 0. Generalizing this to f(x, y) = g(z) naturally extrudes the 2D shape into 3D, and by using a function that increases with |z|, we obtain closed 3D surfaces. Here this is done with g(z) ~ z^2 for the Twitter bird (we also add some color and a cage to confine the bird). Using g(z) ~ z^2 means z ~ ± f(x, y)^(1/2) at the boundaries, and the infinite slope of the square root function gives a smooth bird surface near the z = 0 plane.
Now we will apply the above-outlined construction idea to a slightly more complicated example: we will construct an equation for the United States’ flag. The most complicated-looking part of the construction is a single copy of the star. Using the above function polygonToInequality for the five triangle parts of the pentagram and the central pentagon, we obtain after some simplification the following form for a logical function describing a pentagram.
Here is the pentagram shown, as well as the five lines that occur implicitly in the defining expression of the pentagram.
The detailed relative sizes of the stars and stripes are specified in Executive Order 10834 (“Proportions and Sizes of Flags and Position of Stars”) of the United States government. Using the data from this document and assuming a flag of height 1, it is straightforward to encode the non-white parts of the US flag in the following manner. For the parallel horizontal stripes we use a sin(a y) (with appropriately chosen a) construction. The grid of stars in the left upper corner of the flag is made from two square grids, one shifted against the other (a 2D version of an fcc lattice). The Mod function allows us to easily model the lattice arrays.
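(The two ingredients in toy numpy form — my own sketch; the spacings and radii are placeholders rather than the Executive Order 10834 proportions, and disks stand in for the pentagrams:)

```python
import numpy as np

def stripes(y):
    # 13 alternating bands on a flag of height 1: the sign of sin(13*pi*y)
    return np.sin(13 * np.pi * y) > 0

def star_lattice(x, y, sx=0.1, sy=0.1, r=0.03):
    # two square grids shifted against each other, via the Mod trick
    near = lambda u, s: np.minimum(np.mod(u, s), s - np.mod(u, s))
    grid1 = np.hypot(near(x, sx), near(y, sy)) < r
    grid2 = np.hypot(near(x + sx / 2, sx), near(y + sy / 2, sy)) < r
    return grid1 | grid2
```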
This gives the following closed-form formula for the US flag. Taking the visual complexity of the flag into account, this is quite a compact description.
Making a plot of this formula gives—by construction—the American flag.
We can apply a nonlinear coordinate transformation to the inequality to let the flag flow in the wind.
And using a more quickly varying map, we can construct a visual equivalent of Jimi Hendrix‘s “Star-Spangled Banner” from the Rainbow Bridge album.
As laminae describe regions in 2D, we can identify the plane with the complex plane and carry out conformal maps on the complex plane, such as for the square root function or the square.
Here are the four maps that we will apply to the flag.
Column transformations
And these are the conformally mapped flags.
The next interactive demonstration applies a general power function z -> (shift + scale z)α to the plane containing the flag. (For some parameter values, the branch cut of the power function can lead to folded over polygons.)
So far we have used circles and polygons as the elementary building blocks for our lamina. It is straightforward to use more complicated shapes. Let’s model a region of the plane that approximates the logo of the largest US company—the apple from Apple. As this is a more complicated shape, calculating an equation that describes it will need a bit more effort (and code). Here is an image of the shape to be approximated.
So, how could we describe a shape like an apple? For instance, one could use osculating circles and splines (see this blog entry). Here we will go another route. Algebraic curves can take a large variety of shapes. Here are some examples:
Similar to the Fourier series approximation properties that we used before for the curves, we now build on the Stone–Weierstrass theorem, which guarantees that any continuous function can be approximated by polynomials.
We look for an algebraic curve that will approximate the central apple shape. To do so, we first extract again the points that form the boundary of the apple. (To do this, we reuse the function pointListToLines from the first blog post of this series, mentioned previously.)
We assume that the core apple shape is left-right symmetric and select points from the left side (meaning the side that does not contain the bite). The following Manipulate allows us to quickly locate all points on the left side of the apple.
To find a polynomial p(x, y) = 0 that describes the core apple, we first use polar coordinates (with the origin at the apple's center) and find a Fourier series approximation of the apple's boundary in the form r(φ) = a0 + Σk ak cos(k φ). The use of only the cosine terms guarantees the left-right symmetry of the resulting apple.
We rationalize the resulting approximation and find the corresponding bivariate polynomial in Cartesian coordinates using GroebnerBasis. After expressing the cos(kφ) terms in terms of just cos(φ) and sin(φ), we use the identity cos(φ)^2 + sin(φ)^2 = 1 to eliminate cos(φ) and sin(φ) and obtain a single polynomial equation in x and y.
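(The elimination step can be imitated in sympy on a toy two-term cosine series — coefficients made up — where a lexicographic Gröbner basis removes r, cos φ and sin φ:)

```python
import sympy as sp

x, y, r, c, s = sp.symbols('x y r c s')   # c = cos(phi), s = sin(phi)
relations = [
    r - (2 + sp.Rational(1, 2) * (2 * c**2 - 1)),  # r(phi) = 2 + cos(2 phi)/2
    x - r * c,                                     # Cartesian <-> polar
    y - r * s,
    c**2 + s**2 - 1,                               # lets us eliminate c and s
]
gb = sp.groebner(relations, c, s, r, x, y, order='lex')
elim = [g for g in gb.exprs if not (g.free_symbols & {c, s, r})]
print(elim[0])   # a single polynomial p(x, y) = 0 for the boundary curve
```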
As we rounded the coefficients, we can safely ignore the last digits in the integer coefficients of the resulting polynomial and so shorten the result.
Here is a slightly simplified version.
Here is the resulting apple as an algebraic curve.
Now we need to take a bite on the right-hand side and add the leaf on top. For both of these shapes, we will just use circles as the geometric shapes. The following interactive Manipulate allows the positioning and sizing of the circles, so that they agree with the Apple logo. The initial values are chosen so that the circles match the original image boundaries. (We see that the imported image is not exactly left-right symmetric.)
So, we finally arrive at the following inequality describing the Apple logo.
Now that we have discussed how to make laminae of various shapes, let’s have some fun and use a 2D lamina from Wolfram|Alpha to derive a variety of new 2D and 3D images from it. Like in the case of curves, there are nearly unlimited possibilities. First, as a small reward for all of the above implemented code, let’s reward ourselves with some chocolate. Starting with the Easter bunny lamina.
We can easily extract the polygons from this 2D graphic and construct a twisted bunny in 3D.
The Rvachev R-form allows us to immediately make a 3D Easter bunny made from milk chocolate and strawberry-flavored chocolate. By applying the logarithm function, the parts where the defining function is negative are not shown in the 3D plot, as they give rise to complex-valued function values.
We can also make the Easter bunny age within seconds, meaning his skin gets more and more wrinkles as he ages. We carry out this aging process by taking the polygons that form the lamina and letting them undergo a Brownian motion in the plane.
Let’s now play with some car logo-like laminae. We take a Yamaha-like shape; here are the corresponding region and 3D plot.
We could, for instance, take the Yamaha lamina and place 3D cones in it.
And in the next example, we use a Volkswagen-like shape to construct a half-sphere with a pattern inside.
By forming a weighted mixture between the Yamaha equation and the Volkswagen equation, we can form the shapes of Yamawagen and Volksmaha.
Next we want to construct another, more complicated, 3D object from a 2D lamina. We take the Superman insignia.
superman lamina
The function superman[{x, y}] returns True if a point in the x, y plane is inside the insignia.
And here is the object we could call the Superman solid. (Or, if made from the conjectured new supersolid state of matter, a super(man)solid.) It is straightforwardly defined through the function superman. The Superman solid engraves the shape of the Superman logo in the x, z plane as well as in the y, z plane of the resulting solid.
Viewed from the front as well as from the side, the projection is the Superman insignia.
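(The construction idea in a few lines of Python — superman here is a stand-in mask, not the actual insignia test:)

```python
import numpy as np

def superman(x, y):
    # placeholder inside/outside test; the real one is the insignia lamina
    return np.abs(x) + np.abs(y) < 1

def superman_solid(x, y, z):
    # intersect two perpendicular extrusions: both the front (x, z) and the
    # side (y, z) projections of the solid are then the 2D region itself
    return superman(x, z) & superman(y, z)
```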
To stay with the topic of Superman, we could take the Bizarro curve and roll it around to form a Bizarro-Superman cake where Superman and Bizarro face each other as cake cross sections.
Superman cake
This cake we can then refine by adding some kryptonite crystals, here realized through elongated triangular dipyramid polyhedra.
Next, let’s use a Batman insignia-shaped lamina and make a quantum Batman out of it.
Batman lamina
We will solve the time-dependent Schrödinger equation for a quantum particle in a 2D box with the Batman insignia as the initial condition. More concretely, assume the initial wave function is 1 within the Batman insignia and 0 outside. So, the first step is the calculation of the 2D Fourier coefficients.
Numerically integrating a highly oscillating function over a domain with sharp boundaries using numerical integration can be challenging. The shape of the Batman insignia suggests that we first integrate with respect to y and then with respect to x. The lamina can be conveniently broken up into the following subdomains.
All of the integrals over y can be calculated in closed form. Here is one of the integrals shown.
To calculate the integrals over x, we need to multiply the integralsWRTy with sin(k π x) and then integrate. Because k is the only parameter that is changing, we use the (new in Version 9) function ParametricNDSolveValue.
We calculate 200^2 Fourier coefficients. This relatively large number is needed to obtain a good solution of the Schrödinger equation. (Due to the discontinuous nature of the initial conditions, for an accurate solution, even more modes would be needed.)
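(A pared-down sketch of the whole pipeline — with a generic placeholder mask instead of the Batman lamina, crude grid quadrature instead of the closed-form integrals, and only 40² modes instead of 200²; hbar = mass = 1 in a unit box:)

```python
import numpy as np

N, M = 256, 40
xs = (np.arange(N) + 0.5) / N
X, Y = np.meshgrid(xs, xs, indexing='ij')
psi0 = ((X - 0.5)**2 + (Y - 0.5)**2 < 0.1).astype(float)  # placeholder shape

m = np.arange(1, M + 1)
Sx = np.sin(np.pi * np.outer(m, xs))        # tables of sin(m*pi*x) modes
coeff = 4 * Sx @ psi0 @ Sx.T / N**2         # c_mn = 4 <psi0, sin sin>

def psi(t):
    # attach the phases exp(-i*E_mn*t), E_mn = (m^2 + n^2)*pi^2/2, and resum
    E = (m[:, None]**2 + m[None, :]**2) * np.pi**2 / 2
    return Sx.T @ (coeff * np.exp(-1j * E * t)) @ Sx
```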
Using again the function xyArray from above, here is how the Batman logo would look if it were to quantum-mechanically evolve.
We will now slowly end our brief overview on how to equationalize shapes through laminae. As a final example, we unite the Fourier series approach for curves discussed in the first blog post of this series with the Rvachev R-function approach and build an apple where the bite has the form of the silhouette of Steve Jobs, the Apple founder who suggested the name Mathematica. The last terms of the following inequality result from the Fourier series of Jobs’ facial profile.
Brilliant!!! Really impressive
Posted by Fernando July 18, 2013 at 2:22 pm Reply
And this is what mathematicians do for recreation. I didn't understand much of it, but I found it to be a delightful stream of consciousness and a fun illustration of a practical [?] application of math principles using Mathematica.
Posted by Richard Johnstone July 18, 2013 at 3:33 pm Reply
|
167f8a3492ae14aa | Quantum Physics II - PHYS 304
Class Meeting: Tuesday 10:30 - 12:00
Thursday 10:30 - 12:00
1 – Overall Aim of Course
To give students an understanding of the use of quantum mechanics to model the structure of atoms and describe real systems and real phenomena, and learn how to do calculations in more realistic settings, in particular via various approximation methods. Knowledge of the concepts of quantum physics is fundamental to the understanding of the mechanisms used widely in modern science and technology. It also emphasizes the relevance of physics to technology, instrumentation and measurement.
2- Objectives
- Understand operator matrices and spin, together with the addition of angular momenta.
- Understand the origin of quantum numbers and be able to solve the Schrödinger equation. Become familiar with atomic units and use them correctly in problem solving
- Understand the interaction of an electron with an electromagnetic field and be able to use it in simple problems
- Understand the concept of perturbation theory
- Use critical thinking and scientific problem solving to make informed decisions
- Cultivate a professional attitude and develop skills relating to teamwork and responsibility for individual learning
- Understand the wavefunctions of atoms and molecules
3– Intended Learning Outcomes of Course (ILOs)
At the end of the course, students should be able to
Knowledge and Understanding
- Understand the theory of the principles of quantum physics
- Demonstrate familiarity with theories and concepts of quantum physics.
Intellectual Abilities
- Apply appropriate theories, principles and concepts relevant to the quantum physics
- Analyze and interpret information from a variety of sources relevant to the subjects of this course.
- Ability to identify, formulate, and solve problems in quantum physics
- Exercise appropriate judgment in selecting and presenting information using various methods relevant to quantum physics
Professional and Practical Competencies
- Efficient use of the techniques, skills, and tools of solving problem in quantum mechanics
Download course description as PDF
Download course specification as PDF |
6f4618bf5959c49c | Symmetry, Integrability and Geometry: Methods and Applications (SIGMA)
SIGMA 2 (2006), 064, 4 pages nlin.SI/0408027
On a 'Mysterious' Case of a Quadratic Hamiltonian
Sergei Sakovich
Institute of Physics, National Academy of Sciences, 220072 Minsk, Belarus
Received June 02, 2006, in final form July 18, 2006; Published online July 28, 2006
We show that one of the five cases of a quadratic Hamiltonian, which were recently selected by Sokolov and Wolf who used the Kovalevskaya-Lyapunov test, fails to pass the Painlevé test for integrability.
Key words: Hamiltonian system; nonintegrability; singularity analysis.
pdf (138 kb) ps (107 kb) tex (7 kb)
1. Sokolov V.V., Wolf T., Integrable quadratic classical Hamiltonians on so(4) and so(3,1), J. Phys. A: Math. Gen., 2006, V.39, 1915-1926, nlin.SI/0405066.
2. Ablowitz M.J., Ramani A., Segur H., A connection between nonlinear evolution equations and ordinary differential equations of P-type. I, J. Math. Phys., 1980, V.21, 715-721.
3. Ramani A., Grammaticos B., Bountis T., The Painlevé property and singularity analysis of integrable and non-integrable systems, Phys. Rep., 1989, V.180, 159-245.
4. Tsiganov A.V., Goremykin O.V., Integrable systems on so(4) related with XXX spin chains with boundaries, J. Phys. A: Math. Gen., 2004, V.37, 4843-4849, nlin.SI/0310049.
5. Sokolov V.V., On a class of quadratic Hamiltonians on so(4), Dokl. Akad. Nauk, 2004, V.394, 602-605 (in Russian).
6. Ramani A., Dorizzi B., Grammaticos B., Painlevé conjecture revisited, Phys. Rev. Lett., 1982, V.49, 1539-1541.
7. Grammaticos B., Dorizzi B., Ramani A., Integrability of Hamiltonians with third- and fourth-degree polynomial potentials, J. Math. Phys., 1983, V.24, 2289-2295.
8. Ablowitz M.J., Clarkson P.A., Solitons, nonlinear evolution equations and inverse scattering, Cambridge, Cambridge University Press, 1991.
9. Sakovich S.Yu., Tsuchida T., Symmetrically coupled higher-order nonlinear Schrödinger equations: singularity analysis and integrability, J. Phys. A: Math. Gen., 2000, V.33, 7217-7226, nlin.SI/0006004.
10. Sakovich S.Yu., Tsuchida T., Coupled higher-order nonlinear Schrödinger equations: a new integrable case via the singularity analysis, nlin.SI/0002023.
|
52f9f7d0ba5684e2 | Sunday, 5 October 2014
Entertainment stuff from the week 29/9 - 5/10/14
Good day, researchers,
This week, some journalists noticed that some researchers have been 'sneaking' references to songs by Bob Dylan into their Paper titles. By which i mean "the researchers told a journalist that they'd been referencing Dylan songs".
Examples include 'Nitric Oxide and inflammation: The answer is blowing in the wind.' and 'Blood on the Tracks: A Simple Twist of Fate?'
When other researchers heard about the competition, totalling four competitors, they decided on a frivolous bet (as is the wont of many researchers): the person to have sneaked in the most references by the time they retire would earn a free lunch at a restaurant in Solna, north of Stockholm, where the university is based.
This isn't the only case of humour in scientific literature, however. Oh no! Some of these might not be deliberate, but they are all amusing:
'Recursive fury: conspiracist ideation in the blogosphere in response to research on conspiracist ideation.'
'Would Bohr be born if Bohm were born before Born?'
{Its explanation, by author Hrvoje Nikolic, being his attempt to compare the work of the quantum physicists David Bohm and Max Born: "I discuss a hypothetical historical context in which a Bohm-like deterministic interpretation of the Schrödinger equation is proposed before the Born probabilistic interpretation and argue that in such a context the Copenhagen (Bohr) interpretation would probably have not achieved great popularity among physicists."}
'‘Christ fucking shit merde!' Language preferences for swearing among maximally proficient multilinguals'
'An analysis of the forces required to drag sheep over various surfaces'
'The case of the disappearing teaspoons: longitudinal cohort study of the displacement of teaspoons in an Australian research institute'
'Sex with knockout models: behavioral studies of estrogen receptor alpha'
'The Origin of Chemical Elements'
by Alpher, Bethe, and Gamow
'Contrastive Focus Reduplication in English (The Salad-Salad Paper)'
'When Zombies Attack!: Mathematical Modelling of an Outbreak of Zombie Infection'
'The first case of homosexual necrophilia in the mallard Anas platyrhynchos (Aves: Anatidae)'
'Hydraulic compression of mice to 166 atmospheres'
'Light-dependent homosexual activity in males of a mutant of Drosophila melanogaster'
'Sexual harassment of a king penguin by an Antarctic fur seal'
'Der unsachgemäße Gebrauch eines Penisringes aus Titan'
(Improper use of a penis ring made from titanium)
'Destruction of Nuclear Bombs Using Ultra-High Energy Neutrino Beam'
{This one's my favourite, LOL}
This announcement is less comic though... kinda ;-)
A professor at UNC-Chapel Hill, in the College of Arts and Sciences, has announced that she has proved black holes to not exist!
This announcement's come as something of a surprise to the Physics world. Especially to those who've been studying them for decades.
She contends that her calculations show that a collapsing core releases enough Hawking radiation to reduce the mass of the core to the point that a conventional Black Hole cannot form, which includes of course its singularity and event horizon.
Even more staggeringly, she claims that her 'proof' unites the Theories of Relativistic Physics and Quantum Physics - something there would definitely be a Nobel Prize in.
Conjectures such as these date back to before the first Black Hole was observed to be distorting spacetime, so there's really nothing new about it.
But the fact that this turns out to be tired old baseless conjecture hasn't stopped gullible journos jumping on a bandwagon... even the ones that claim science interest.
For more details, just read Bob Novella's article, linked ^ up there.
So much for attracting female talent into STEM, LOL.
"If you want to follow other women into STEM, for a life of infamous bullshit, then step this way" :-D
Here's another 'interesting' one:
Prince recently had a Facebook-based chat session, with fans, in which he apparently answered just one question, in 3 hours. That question was:
"Please address the importance of ALL music being tuned to 432hz sound frequencies???"
If you can call that a question. It's more of an entreaty than a query.
Prince simply replied "The Gold Standard", with a link.
By coincidence, this week's SGU episode answered a question about the 432 Hz tuning thing:
"I'm a software engineer who writes music as a hobby. Recently I have bumped into the topic of 432 Hz tuning in music. You can find a lot of 'information' about this all over the internet. The basic premise here that once upon a time, musical instruments were all tuned so that the note A4's frequency was 432 Hz, which is 'said to be mathematically consistent with the patterns of the universe'. Then Nazi Germany came along and deliberately changed the standard to 440 Hz which is the most common tuning today, 'after conducting scientific researches to determine which range of frequencies best induce fear and aggression'. I was pretty saddened to find out that there are musicians who actually believe all of this. The 'advocates' of this even go as far as claiming that listening to music tuned to 432 Hz can cure cancer and other medical conditions. I believe that this topic might be an interesting one for you guys to discuss in the podcast. Please feel free to contact me if you need more information on this subject."
Apparently, this factoid is popularly believed by musical people. Bless their little cotton socks, they're not very skeptical, are they!
The very idea that there's a pitch 'anchor' to any musical context is completely erroneous. You can put any song in any key as long as they're all in the same one - you can move all the notes up, and all the notes down - and it will make no difference, other than to how easy you find it to sing the song.
The whole idea of Nazis coming along and changing 'it' from 432 to 440 Hz is complete baloney.
The notion of 432 Hz being 'the best' comes from a puerile idea of a universal resonant harmonic 'running through the universe' which bears no relation to actual sound.
Different materials have different resonant frequencies, and so mixtures of materials (such as the human body, and especially the entire universe) will have a broad variety of harmonics that contradict each other. This means there is no such thing as a universal resonance, and it's why you can find a resonant frequency for a tuning fork or Triangle (uniform metal, with a symmetrical shape) but not for your finger (lots of different proteins and things, that wobble about and absorb the vibrations).
Of course, the idea of 'natural' vibrations is one thrown around gaily by Newagers, and so you can easily find plenty of woo-woo quack claims about finding inner harmony, restoring your energy levels, etc etc.
None of their claims have anything to do with reality.
And that includes all of the people purporting to recentre music around 432 Hz instead of 440 Hz. What are they going to do? Just transpose everything down a bit? 440 Hz is 'A' whereas 432 Hz is... a flat 'A'. The next note down is G# at 415.3 Hz. What's the point of making all music just slightly flatter??
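The arithmetic is a two-liner (equal temperament: f(n) = 440·2^(n/12) for n semitones from A4):

```python
import math

g_sharp = 440 * 2 ** (-1 / 12)          # 415.30 Hz, the next note down
cents = 1200 * math.log2(440 / 432)     # ~31.8 cents: 432 Hz is just a flat A
print(round(g_sharp, 1), round(cents, 1))
```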
It should be no surprise that there are similar factoids floating around, about the Schumann Resonance (atmospheric wobbling).
And it should be no surprise that progenitors of such ideas frequently equivocate between sound and light - radio waves are electromagnetic (light) and thereby not sound.
Sound, of course, requires a physical medium to propagate, whereas light requires only spacetime.
There are various charlatans selling trinkets and all kinds of garbage, on the basis of vibrational resonances and things... and sometimes on the grounds that they protect you from WiFi and 'stuff like that'.
Nope. WiFi is microwaves - that's light - not sound. Not vibration. It's completely different.
Stay skeptical, people. Stay skeptical :-)
Did you know that The Guinness Book of World Records and CERN are both celebrating their 60th anniversaries this year? Well, you do now.
"“It’s important that the Guinness World Records book continues to monitor these fundamental science superlatives,” said the book’s Editor-in-Chief Craig Glenday. “The fact that CERN was acknowledged in our very first edition 60 years ago and continues to break records in our latest edition is testament to the importance of this international scientific effort. It’s been a privilege to visit the Large Hadron Collider and present the team leaders’ their certificates, and I’m sure there will be plenty more record-breaking at CERN in its next 60 years.”"
Hear, hear :-)
In other news:
Does the fish in your fridge glow? Researchers in New Zealand are investigating the case of a woman who noticed her dog food was glowing blue. It's most likely to be caused by bioluminescent bacteria, that live in the sea. However, without oxygen they can't respire, and if they can't respire they can't produce the light, so people are unlikely to notice. Forget glowing cats - glowing fishy food predates human experimentation by a long way, LOL
A superstitious old lady in Bosnia thinks she can cure eye complaints by licking people's eyes. This is an infection risk whether she washes her tongue with alcohol (as described) or not. I still go by the advice that you should never use the same hand to wipe both eyes, in order to stem any budding infection's spread. But i'm pretty sure licking's going to do nothing for cataracts!
Ivan Trifonov has (maybe) become the first person ever to fly a hot air balloon into a cave. He did so with a 25 minute trip into Mamet Cave, on Velebit Mountain, Croatia, using a specially-designed balloon and frame. He currently holds records in the Guinness Book Of World Records for flying over the North Pole and the South Pole, and has apparently also flown over the Mediterranean Sea, Jerusalem, the Great Wall of China, and The Kremlin.
Yet another dowser has been prosecuted for fraud, in the UK. Following on from the cases of Kim McCormick and Gary Bolton, Samuel Tree and his wife Joan have been prosecuted for selling fake bomb detectors. Make no mistake: they were not selling bomb detectors, but nor were they selling golf ball detectors, or any kind of detector - the crime was in selling something on the basis of dowsing pseudoscience.
A Pope believes in angels. In other news, bears shit in the woods. Superstitionists will superstitiously believe in phantasms :-D
A journalist at the Mirror has groomed a Tory MP into showing them their John-Thomas, and then published the entire affair with the defence that publication was in 'public interest'. I find it hard to have sympathy for any kind of Polly, let alone a Tory one, but when you've been manipulated into doing something embarrassing (but not illegal) that's called 'entrapment'. What it's not called, is 'public service investigative journalism'. If this were done on a teenage girl, there would be outrage, and the Mirror would never have touched the story (i hope) but because this was a grown man pretending to be a young girl, seducing an old man... the old man's the perpetrator? Hmmm....
A Scottish man has behaved in a threatening way, earning him a fine of £200. But the interesting thing about the story, is that he used a spade to do it - in fact, he banged a spade against a radiator. Such is the intellectual height of journalism today, the Paper helpfully supplied a picture of a spade, for its readers. They're all currently wondering what a radiator looks like :-P
Has a Japanese zoo really been trying to mate two male hyaenas, for the last four years? I don't know for sure, but i do know that female hyaenas possess a pseudopenis, which is essentially an inverted vaginal wall. Hyaenas have a very matriarchal society, and so denying males sex is part of the matriarchs retaining power. Unfortunately, this means humans find it very difficult to tell male from female hyaenas, so this story is very plausible, although i would have expected them to do a sex check while they were transporting the hyaenas to the zoo from South Korea. Then again, i've never run a zoo, LOL
'Get set...Demonstrate Chip Pan Fire'
There are plenty of potentially-dangerous chemicals available for domestic use. Pollies who try to ban dangerous substances don't seem to understand that :-D
'Get set...Demonstrate Iodine Clock'
'Simplest DIY Speaker'
'Why does our hair turn grey? - A Week in Science'
'Astronomers LOVE Acronyms'
'John and Kevin's Sunday Papers - September'
'Sir Roger Moore shunned scotch egg for ham hock terrine'
Stop the press!! LMAO
'Richard Herring's Meaning of Life - Episode 4 - Death'
'Mitch Benn - Can We Come With You? (unbroadcast)' (my upload)
Word Of The Week: inamorata -- a woman with whom one is in love
Expression Of The Week: 'his nibs' -- a mildly derisive term for someone in a position of authority; the term 'nibs' probably derives from the term 'nob' (or 'knob') which means 'head' (later, developing the euphemistical sense for male genitals) and of course 'head' also refers to someone in a position of authority
Quote Of The Week: "Get those fucking nuns away from me!" - Norman Douglas' last words (author of South Wind, 1917)
'ISIHAC Live on Stage'
Actual video of the actual Clue teams, and Humph, on stage, weaving their magic. Wow :-D
Have some beautiful geology. I don't usually post this content, from my tumblr, but here's some to fill up the page :-P
'Ethiopian opal geode'
'Koroit opal'
'Calcite with quartz from Huanggang Mine'
'Hand-shaped aragonite formation'
'Rainbow Aura Quartz'
'Rainbow obsidian blades'
'Angel Aura Quartz Crystals'
'Ice cave in Iceland'
'Ice Caves Around the World'
'A stone rainbow'
'Pillar engraving'
'Bryce Canyon National Park'
'Tungurahua Volcano Vertical'
|
17d66a8f45df81ff |
Q on solution of Spherical Harmonic
1. Dec 26, 2005 #1
I read the solution for spherical harmonic using associated Legendre polynomials, and am wondering....
For example, a solution is written in here, but I wonder why the constant in formula (3) can be determined as [tex]-m^2[/tex], the negative of the square of an integer.
A similar thing applies to the [tex]r[/tex] variable (in the above page it doesn't appear, though),
[tex]{r(\frac {\partial^2} {{\partial r}^2})(rR(r))} / {R(r)} = l(l+1)[/tex]
Here, I can understand this should be a constant, but cannot understand why it's a form of [tex]l(l+1)[/tex] where [tex]l[/tex] is an integer....
Will anyone tell me??
Thanks in advance!
Last edited: Dec 26, 2005
3. Dec 26, 2005 #2
We know "the constant" in eqn (3) is constant, so we have the freedom to call it *whatever* we want, as long as it is indeed constant. Calling it -m^2 is just a trick because it simplifies the algebra later on (when you get the equation (4)).
Another way to explain it is that the differential equation:
(d^2 X)/(dY^2) = (-c^2)*X
Is a pretty standard one (ie, it crops up a lot in various bits of physics and maths), and we know the solutions to it (given by eqn (4) on your link). So when you come across the equation:
(d^2 X)/(dY^2) = S*X , S=const
You recall the other differential equation and see it's better to write S as -m^2 because then we immediately have the solutions, since it's just a standard differential equation.
As for the other question, I suspect it's for the same reason, but I'm not sure.
4. Dec 26, 2005 #3
Good question. Well, actually, you can take any constant that you want. The fact that -m² is taken, is a direct consequence of , err, some mathematical "playing"...Here it goes :
[tex]L_z \Phi_m( \phi) = m \hbar \Phi_m( \phi)[/tex]
[tex]-i \frac {\partial}{\partial \phi} \Phi_m( \phi) = m \Phi_m( \phi)[/tex]
[tex]-i \frac {\partial ^2}{{\partial \phi}^2} \Phi_m( \phi) = m \frac {\partial}{\partial \phi}\Phi_m( \phi) = \frac {m^2}{-i} \Phi_m ( \phi)[/tex]
[tex]\frac{1}{\Phi_m (\phi)} \frac{\partial ^2}{{\partial \phi}^2} \Phi_m ( \phi) = \frac{m^2}{i^2} = -m^2[/tex]
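(Not part of the original thread — a two-line sympy check of the algebra above: with [tex]\Phi = e^{im\phi}[/tex], the ratio [tex]\Phi''/\Phi[/tex] indeed comes out as [tex]-m^2[/tex].)

```python
import sympy as sp

phi, m = sp.symbols('phi m', real=True)
Phi = sp.exp(sp.I * m * phi)                    # azimuthal eigenfunction
print(sp.simplify(sp.diff(Phi, phi, 2) / Phi))  # prints -m**2
# integer m then follows from single-valuedness, Phi(phi + 2*pi) = Phi(phi)
```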
edit : if i can make a suggestion : you should not be studying this from a website, no matter how reliable it is. What books are you using for your QM course ?
Last edited: Dec 26, 2005
5. Dec 26, 2005 #4
Correct. There are several ways to explain this :
1) structure of the Legendre Polynomials
2) symmetry of the physics at hand (ie the orbitals) : grouptheory
3) combining following two aspects:
a) the eigen equations for [tex]L^2[/tex] and [tex]L_z[/tex]
b) [tex] <L^2> \geq <L_z^2>[/tex] because [tex]L_x[/tex] and [tex]L_y[/tex] are Hermitian
edit : again, do not use an internet site as your primary source of study
Last edited: Dec 26, 2005
6. Dec 26, 2005 #5
Thanks all! I think I've understood: My understanding is, all the solutions of Laplace equation in spherical coordinates are represented with these m and l using associated polynomials, so these "discrete constants" are proved to be enough to solve this equation.
As you all say, if these discrete values are just for convenience (just to simplify), it's enough to me.
I'm now mainly reading Greiner's "Quantum Mechanics: an introduction" because it has a lot of calculations and exercises.
Last edited: Dec 26, 2005
7. Dec 26, 2005 #6
You mean Schrödinger equation, right ?
Well, be careful with this vocabulary. It is not just about "convenience". These values are used for a specific reason : because they are building blocks of the correct description of nature. One can, and one DOES, prove them using one of the three systems that i outlined in my previous post. Which of the three is your book using ? Probably the second one if it is introductory.
Last edited: Dec 26, 2005
|
af15d651c859dfc7 | Welcome to infty.net
Numerical and scientific software
The following are some numerical, scientific and utility codes that I am in the process of making available:
Numerical/Scientific codes:
1. Gaussian quadrature rules for classical orthogonal polynomials
2. Noninteracting fermions in the canonical ensemble
3. Eigenfunctions of 1-d Schrödinger equation
4. Young's natural representation of the symmetric group
5. Polynomial fitting
6. Fermion diagonalization in the m-scheme
Other useful codes:
1. Options processing for Fortran
2. Emacs script for trivial input of greek & mathematical symbols
3. Using unicode symbols (greek letters, etc.) in source code
4. Miscellaneous Fortran routines
Several of the above codes require LAPACK to be installed. Most are written in modern Fortran and may be compiled with the excellent gfortran (version 4.6 or greater) or Intel Fortran compilers. (Other compilers should work but have not been tested.)
Most of the source code available through this website is licensed under the MIT license, meaning modification and redistribution for any purpose is permitted so long as the copyright, license and disclaimer information is retained. Please see the individual source files for more info.
Chris N. Gilbreth
This is the personal website of Christopher N. Gilbreth. I am currently a postdoctoral associate at Yale University in the Physics department, where I study theoretical approaches to the quantum many-body problem. My current focus is on quantum Monte Carlo and cold atomic gases.
You can reach me by sending an email to the unique address of the form XYZ@gmail.com, where X=c, Y=n, and Z=gilbreth (i.e. my first two initials, followed by my last name, with no punctuation).
Bug reports and other helpful comments are greatly appreciated.
Thank you for visiting! |
ce7e339cfed7d512 | Monday, February 27, 2017
Questions related to the twistor lift of Kähler action
During last couple years a kind of palace revolution has taken place in the formulation and interpretation of TGD. The notion of twistor lift and 8-D generalization of twistorialization have dramatically simplified and also modified the view about what classical TGD and quantum TGD are.
The notion of adelic physics suggests the interpretation of scattering diagrams as representations of algebraic computations, with diagrams producing the same output from a given input being equivalent. The simplest possible manner to perform the computation corresponds to a tree diagram. As will be found, it is now possible to even propose explicit twistorial formulas for scattering amplitudes since the horrible problems related to the integration over WCW might be circumvented altogether.
From the interpretation of p-adic physics as physics of cognition, heff/h=n could be interpreted as the order of a Galois group. Discrete coupling constant evolution would correspond to phase transitions changing the extension of rationals and its Galois group. TGD inspired theory of consciousness is an essential part of TGD, and the crucial Negentropy Maximization Principle follows in a statistical sense from number theoretic evolution as an increase of the order of the Galois group for the extension of rationals defining the adeles.
During the re-processing of the details related to the twistor lift, it became clear that the earlier variant of the twistor lift can be criticized and allows an alternative. This option led to a simpler view about the twistor lift, to the conclusion that minimal surface extremals of Kähler action represent only the asymptotic situation near boundaries of CD (external particles in scattering), and also to a re-interpretation of the p-adic evolution of the cosmological constant: the cosmological term would correspond to the entire 4-D action, and the cancellation of Kähler action and cosmological term would lead to the small value of the effective cosmological constant. The pleasant observation was that the correct formulation of 6-D Kähler action in the framework of adelic physics implies that the classical physics of TGD does not depend on the overall scaling of Kähler action but that quantum classical correspondence implies this dependence. It is however too early to select between the two options.
For a summary of earlier postings see Latest progress in TGD.
Articles and other material related to TGD.
Wednesday, February 22, 2017
Questions related to the quantum aspects of twistorialization
For a summary of earlier postings see Latest progress in TGD.
Articles and other material related to TGD.
Monday, February 13, 2017
A new view about color, color confinement, and twistors
In my humble opinion the twistor approach to the scattering amplitudes is plagued by some mathematical problems. Whether this is only my personal problem is not clear (notice that this posting is a corrected version of an earlier one).
1. As Witten shows, the twistor transform is problematic in signature (1,3) for Minkowski space since the bi-spinor μ playing the role of momentum is complex. Instead of defining the twistor transform as an ordinary Fourier integral, one must define it as a residue integral. In signature (2,2) for space-time the problem disappears since the spinors μ can be taken to be real.
2. The twistor Grassmannian approach works also nicely for (2,2) signature, and one ends up with the notion of positive Grassmannians, which are real Grassmannian manifolds. Could it be that something is wrong with the ordinary view about twistorialization rather than only my understanding of it?
3. For M4 the twistor space should be non-compact SU(2,2)/SU(2,1)× U(1) rather than CP3= SU(4)/SU(3)× U(1), which it is taken to be. I do not know whether this is only about short-hand notation or a signal about a deeper problem.
4. Twistorialization does not force SUSY but strongly suggests it. The super-space formalism allows treating all helicities at the same time and this is very elegant. This however forces Majorana spinors in M4 and breaks fermion number conservation in D=4. LHC does not support N=1 SUSY. Could the interpretation of SUSY be somehow wrong? TGD seems to allow broken SUSY but with separate conservation of baryon and lepton numbers.
In the number theoretic vision something rather unexpected emerges, and I will propose that this unexpected feature might allow solving the above problems and even more: understanding color and even color confinement number theoretically. First of all, a new view about color degrees of freedom emerges at the level of M8.
1. One can always find a decomposition M8=M20× E6 so that the complex light-like quaternionic 8-momentum restricts to M20. The preferred octonionic imaginary unit represents the direction of the imaginary part of the quaternionic 8-momentum. The action of G2 on this momentum is trivial. Number theoretic color disappears with this choice. For instance, this could take place for a hadron but not for partons, which have transversal momenta.
2. One can also consider the situation in which one has localized the 8-momenta only to M4 =M20× E2. The distribution for the choices of E2 ⊂ M20× E2=M4 is a wave function in CP2. Octonionic SU(3) partial waves in the space CP2 for the choices of M20× E2 would correspond to color partial waves in H. The same interpretation is also behind M8-H correspondence.
3. The transversal quaternionic light-like momenta in E2⊂ M20× E2 give rise to a wave function in transversal momenta. Intriguingly, the partons in the quark model of hadrons have only precisely defined longitudinal momenta and only the size scale of transversal momenta can be specified.
The introduction of twistor sphere of T(CP2) allows to describe electroweak charges and brings in CP2 helicity identifiable as em charge giving to the mass squared a contribution proportional to Qem2 so that one could understand electromagnetic mass splitting geometrically.
The physically motivated assumption is that string world sheets at which the data determining the modes of induced spinor fields carry vanishing W fields and also vanishing generalized Kähler form J(M4) +J(CP2). Em charge is the only remaining electroweak degree of freedom. The identification as the helicity assignable to T(CP2) twistor sphere is natural.
4. In general case the M2 component of momentum would be massive and mass would be equal to the mass assignable to the E6 degrees of freedom. One can however always find M20× E6 decomposition in which M2 momentum is light-like. The naive expectation is that the twistorialization in terms of M2 works only if M2 momentum is light-like, possibly in complex sense. This however allows only forward scattering: this is true for complex M2 momenta and even in M4 case.
The twistorial 4-fermion scattering amplitude is however holomorphic in the helicity spinors λi and has no dependence on the conjugate spinors λ̃i. Therefore it carries no information about M2 mass! Could M2 momenta be allowed to be massive? If so, twistorialization might make sense for massive fermions!
M20 momentum deserves a separate discussion.
1. A sharp localization of 8-momentum to M20 means vanishing E2 momentum, so that the action of U(2) would become trivial: the electroweak degree of freedom would simply disappear, which is not the same thing as having vanishing em charge (the wave function in the T(CP2) twistorial sphere S2 would be constant). Neither M20 localization nor localization to a single M4 (localization in CP2) looks plausible physically - consider only the size scale of CP2. For the generic CP2 spinors this is impossible, but the covariantly constant right-handed neutrino spinor mode has no electro-weak quantum numbers: this would most naturally mean a constant wave function in the CP2 twistorial sphere.
For the preferred extremals of twistor lift of TGD either M4 or CP2 twistor sphere can effectively collapse to a point. This would mean disappearence of the degrees of freedom associated with M4 helicity or electroweak quantum numbers.
2. The localization to M4⊃ M20 is possible for the tangent space of quaternionic space-time surface in M8. This could correlate with the fact that neither leptonic nor quark-like induced spinors carry color as a spin like quantum number. Color would emerge only at the level of H and M8 as color partial waves in WCW and would require de-localization in the CP2 cm coordinate for partonic 2-surface. Note that also the integrable local decompositions M4= M2(x)× E2(x) suggested by the general solution ansätze for field equations are possible.
3. Could it be possible to perform a measurement localizing the state precisely in a fixed M20, so that the complex momentum is light-like but the color degrees of freedom disappear? This does not mean that the state corresponds to a color singlet wave function! Can one say that the measurement eliminating color degrees of freedom corresponds to color confinement? Note that the subsystems of the system need not be color singlets since their momenta need not be complex massless momenta in M20. Classically this makes sense in many-sheeted space-time. Colored states would always be partons in a color singlet state.
4. At the level of H also leptons carry color partial waves neutralized by Kac-Moody generators, and I have proposed that the pion like bound states of color octet excitations of leptons explain so called lepto-hadrons. Only right-handed covariantly constant neutrino is an exception as the only color singlet fermionic state carrying vanishing 4-momentum and living in all possible M20:s, and might have a special role as a generator of supersymmetry acting on states in all quaternionic subs-spaces M4.
5. Actually, already the p-adic mass calculations performed more than two decades ago forced one to seriously consider the possibility that particle momenta correspond to their projections to M20⊂ M4. This choice does not break Poincare invariance if one introduces a moduli space for the choices of M20⊂ M4, and the selection of M20 could define the quantization axes of energy and spin. If the tips of CD are fixed, they define a preferred time direction assignable to the preferred octonionic real unit, and the moduli space is just S2. The analog of twistor space at space-time level could be understood as T(M4)=M4× S2, and this one must assume since otherwise the induction of metric does not make sense.
What happens to the twistorialization at the level of M8 if one accepts that only M20 momentum is sharply defined?
1. What happens to the conformal group SO(4,2) and its covering SU(2,2) when M4 is replaced with M20⊂ M8? Translations and special conformal transformations both span 2 dimensions; boosts and scalings define 1-D groups SO(1,1) and R respectively. Clearly, the group is the 6-D group SO(2,2), as one might have guessed. Is this the conformal group acting at the level of M8, so that conformal symmetry would be broken? One can of course ask whether the 2-D conformal symmetry extends to conformal symmetries characterized by the hyper-complex Virasoro algebra.
2. Sigma matrices are by 2-dimensionality real (σ0 and σ3 - essentially representations of real and imaginary octonionic units) so that spinors can be chosen to be real. Reality is also crucial in signature (2,2), where standard twistor approach works nicely and leads to 3-D real twistor space.
Now the twistor space is replaced with the real variant of SU(2,2)/SU(2,1)× U(1) equal to SO(2,2)/SO(2,1), which is 3-D projective space RP3 - the real variant of twistor space CP3, which leads to the notion of positive Grassmannian: whether the complex Grassmannian really allows the analog of positivity is not clear to me. For complex momenta predicted by TGD one can consider the complexification of this space to CP3 rather than SU(2,2)/SU(2,1)× U(1). For some reason the possible problems associated with the signature of SU(2,2)/SU(2,1)× U(1) are not discussed in literature and people talk always about CP3. Is there a real problem or is this indeed something totally trivial?
3. SUSY is strongly suggested by the twistorial approach. The problem is that this requires Majorana spinors leading to a loss of fermion number conservation. If one has D=2 only effectively, the situation changes. Since spinors in M2 can be chosen to be real, one can have SUSY in this sense without loss of fermion number conservation! As proposed earlier, covariantly constant right-handed neutrino modes could generate the SUSY but it could be also possible to have SUSY generated by all fermionic helicity states. This SUSY would be however broken.
4. The selection of M20 could correspond at space-time level to a localization of spinor modes to string world sheets. Could the condition that the modes of induced spinors at string world sheets are expressible using a real spinor basis imply the localization? Whether this localization takes place at the fundamental level or only for the effective action due to SH is a question to be settled. The latter option looks more plausible.
To sum up, these observations suggest a profound re-evaluation of the beliefs related to color degrees of freedom, to color confinement, and to what twistors really are.
For details see the new chapter Some Questions Related to the Twistor Lift of TGD of "Towards M-matrix" or the article Some questions related to the twistor lift of TGD.
For a summary of earlier postings see Latest progress in TGD.
Articles and other material related to TGD.
Friday, February 10, 2017
How does the twistorialization at imbedding space level emerge?
One objection against twistorialization at imbedding space level is that M4-twistorialization requires 4-D conformal invariance and massless fields. In TGD one has towers of particles with massless particles as the lightest states. The intuitive expectation is that the resolution of the problem is that particles are massless in the 8-D sense, as are also the modes of the imbedding space spinor fields. M8-H duality indeed provides a solution of the problem. A massless quaternionic momentum in M8 can, for a suitable choice of the decomposition M8= M4× E4, be reduced to a massless M4 momentum, and one can describe the information about the 8-momentum using an M4 twistor and a CP2 twistor.
Second objection is that twistor Grassmann approach uses as twistor space the space T1(M4) =SU(2,2)/SU(2,1)× U(1) whereas the twistor lift of classical TGD uses T(M4)=M4× S2. The formulation of the twistor amplitudes in terms of strong form of holography (SH) using the data assignable to the 2-D surfaces - string world sheets and partonic 2-surfaces perhaps - identified as surfaces in T(M4)× T(CP2) requires the mapping of these twistor spaces to each other - the incidence relations of Penrose indeed realize this map.
For a summary of earlier postings see Latest progress in TGD.
Articles and other material related to TGD.
Wednesday, February 08, 2017
Twistor lift and the reduction of field equations by SH to holomorphy
For a summary of earlier postings see Latest progress in TGD.
Articles and other material related to TGD.
Tuesday, February 07, 2017
Mystery: How Was Ancient Mars Warm Enough for Liquid Water?
The article Mars Mystery: How Was Ancient Red Planet Warm Enough for Liquid Water? tells about a mystery related to the ancient presence of water on the surface of Mars. It is now known that the surface of Mars was once covered with rivers, streams, ponds, lakes and perhaps even seas and oceans. This forces one to consider the possibility that there was once also life on Mars, and that there might still be. There is however a problem. The atmosphere probably contained hundreds of times less carbon dioxide than needed to keep the planet warm enough for liquid water to last. Yet the signatures of flowing water are there. Here is one more mystery to resolve.
Around 2014 I proposed a TGD version of the Expanding Earth Hypothesis (EEH) stating that Earth has experienced a geologically fast expansion period in its past. The radius of the Earth's space-time sheet would have increased by a factor of two from its earlier value. Either the p-adic length scale or heff/h=n for the space-time sheet of Earth, or both, would have increased by a factor 2.
This violent event led to the burst of the underground seas of Earth to the surface, with the consequence that the rather highly developed lifeforms evolved in these reservoirs, shielded from cosmic rays and UV radiation, burst to the surface: the outcome was what is known as the Cambrian explosion. This apparent popping of advanced lifeforms out of nowhere explains why earlier, less developed forms of these complex organisms have not been found as fossils. I have discussed the model for how life could have evolved in underground water reservoirs here.
The geologically fast weakening of the gravitational force by a factor 1/4 at the surface explains the emergence of gigantic life forms like saurians and even giant crabs. Continents were formed: before this the crust was like the surface of Mars now. The original motivation of EEH was indeed the observation that the continents of the recent Earth seem to fit nicely together if the radius were smaller by a factor 1/2. This is just a step further than Wegener went in his time. The model explains many other difficult-to-understand facts and forces one to give up the Snowball Earth model. The recent mainstream view about Earth before the Cambrian Explosion is very different from that provided by EEH. The period of rotation of Earth was 4 times shorter than now - 6 hours - and this would be visible in the physiology of organisms of that time. Whether it could have left remnants in the physiology and behavior of recently living organisms is an interesting question.
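As a quick plausibility check of these factors (my own arithmetic sketch, not from the original post; it assumes only constant mass and conservation of angular momentum for a uniform sphere, while the factor-2 radius increase is the post's hypothesis):

```python
# Arithmetic behind the expansion claims (my own check).
R_ratio = 2.0                  # radius after / radius before the expansion

# Surface gravity g = G*M/R^2 at constant mass M:
g_ratio = 1.0 / R_ratio**2     # 0.25 -> the "weakening by factor 1/4"

# Angular momentum L = (2/5)*M*R^2*omega is conserved for a uniform sphere,
# so omega scales as 1/R^2 and the rotation period as R^2:
period_ratio = R_ratio**2      # 4 -> a 6 hour day becomes a 24 hour day

print(g_ratio, period_ratio)   # 0.25 4.0
```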
What about Mars? Mars now is very similar to Earth before the expansion. Its radius is one half of Earth's now and therefore the same as the radius of Earth before the Cambrian Explosion! Mars is near Earth so that its distance from the Sun is not very different. Could the recent Mars also contain complex life forms in water reservoirs in its interior? Could Mother Mars (or perhaps Martina, if the red planet is not a masculine warrior but a pregnant mother) give rise to their birth? The water that appeared at the surface of Mars could have been a temporary leakage. An interesting question is whether the appearance of water might correspond to the same event that increased the radius of Earth by a factor of two.
Magnetism is important for life in TGD based quantum biology. A possible problem is posed by the very weak recent value of the magnetic field of Mars. The value of the dark magnetic field Bend of Earth, deduced from the findings of Blackman about effects of ELF em fields on the vertebrate brain, has a strength which is 2/5 of the nominal value of BE. Hence the dark MBs of living organisms, perhaps integrating to the dark MB of Earth, seem to be entities distinct from the MB of Earth. Could also Mars have dark magnetic fields?
Schumann resonances might be important for collective aspects of consciousness. In the simplest model of Schumann resonances the frequencies are determined solely by the radius of the planet and would for Mars be 2 times those for Earth now. The frequency of the lowest Schumann resonance would be 15.6 Hz.
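A rough numerical sketch of this scaling (mine, not from the post; it uses the ideal lossless-cavity formula and textbook radii rather than any TGD-specific input - with the post's exact factor of 2 one gets the quoted 15.6 Hz):

```python
# The lowest Schumann mode scales as c/R, so a planet with half the radius
# has roughly twice the frequency (my own illustration).
import math

c = 2.998e8                  # speed of light, m/s
R_earth = 6.371e6            # m
R_mars = 3.390e6             # m, roughly half of Earth's radius

def schumann_ideal(R, n=1):
    """Ideal cavity estimate f_n = (c / (2*pi*R)) * sqrt(n*(n+1))."""
    return c / (2 * math.pi * R) * math.sqrt(n * (n + 1))

f_earth_observed = 7.83                      # Hz, lowest mode on Earth
print(schumann_ideal(R_earth))               # ~10.6 Hz; losses bring it to ~7.8 Hz
print(f_earth_observed * R_earth / R_mars)   # ~14.7 Hz, close to the 15.6 Hz above
```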
For background see the chapters Expanding Earth Model and Pre-Cambrian Evolution of Continents, Climate, and Life and More Precise TGD Based View about Quantum Biology and Prebiotic Evolution of "Genes and Memes" .
For a summary of earlier postings see Latest progress in TGD.
Articles and other material related to TGD.
Monday, February 06, 2017
Chemical qualia as number theoretical qualia?
Certain FB discussions led to the realization that the chemical senses (perception of odours and tastes) might actually be - or at least include - number theoretical sensory qualia providing information about the distribution of Planck constants heff/h=n, identifiable as the order of the Galois group for the extension of rationals characterizing adeles.
See the article Chemical qualia as number theoretical qualia?.
For a summary of earlier postings see Latest progress in TGD.
Articles and other material related to TGD.
Thursday, February 02, 2017
Anomaly in neutron lifetime as evidence for the transformation of protons to dark protons
I found a popular article about a very interesting finding related to the neutron lifetime (see this). The neutron lifetime turns out to be about 8 seconds shorter when measured by looking at what fraction of neutrons disappears via decays in a box than by measuring the number of protons produced in beta decays of a neutron beam travelling through a given volume. The lifetime of the neutron is about 15 minutes, so the relative lifetime difference is about 8/(15× 60) ≈ 0.9 per cent. The statistical significance is 4 sigma: 5 sigma is accepted as the significance for a finding to qualify as a discovery.
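A one-line check of the quoted fraction (my own arithmetic):

```python
tau = 15 * 60        # neutron lifetime, ~15 minutes in seconds
dtau = 8             # bottle-vs-beam lifetime discrepancy in seconds
print(dtau / tau)    # ~0.0089, i.e. just under one per cent
```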
How could one explain the finding? The difference between the methods is that the beam experiment measures only the disappearances of neutrons via beta decays producing protons, whereas the box measurement detects the outcome of all possible decay modes. The experiment suggests two alternative explanations.
1. Neutron has some other decay mode or modes which are not detected in the box method, since one measures the number of neutrons in the initial and final state. For instance, in the TGD framework one could think that neutrons can transform to dark neutrons at some rate. But it is extremely improbable that the rate would be just about 1 per cent of the decay rate. Why not one millionth? Beta decay must be involved in the process.
Could some fraction of neutrons decay to a dark proton, electron, and neutrino, a mode that would not be detected in the beam experiment? No, if one takes seriously the basic assumption that particles with different values of heff/h= n do not appear in the same vertex. The neutron should first transform to a dark neutron, but then the disappearance could also take place without the beta decay of the dark neutron, and the discrepancy would be much larger.
2. The proton produced in the ordinary beta decay of the neutron can however transform to a dark proton not detected in the beam experiment! This would automatically predict that the rate is some reasonable fraction of the beta decay rate.
About 1 percent of the resulting protons would transform to dark protons. This makes sense!
What is so nice is that the transformation of protons to dark protons is indeed the basic mechanism of TGD inspired quantum biology! For instance, it would occur in the Pollack effect, in which irradiation of water bounded by a gel phase generates a so called exclusion zone, which is negatively charged. The TGD explanation is that some fraction of protons transforms to dark protons at magnetic flux tubes outside the system. The negative charge of DNA and the cell could be due to this mechanism. One also ends up with a model of the genetic code with the analogs of DNA, RNA, tRNA and amino-acids represented as triplets of dark protons. The model predicts correctly the numbers of DNA codons coding for a given amino-acid. Besides biology the model has applications to cold fusion and various free energy phenomena.
See the article Two different lifetimes for neutron as evidence for dark protons and chapter New Particle Physics Predicted by TGD: Part I.
For a summary of earlier postings see Latest progress in TGD.
Articles and other material related to TGD.
Why metabolism and what happens in bio-catalysis?
The TGD view about dark matter also gives a strong grasp of metabolism and bio-catalysis - the key elements of biology.
Why is metabolic energy needed?
The simplest and at the same time most difficult question that an innocent student can ask in biology class is: "Why must we eat?". Or using more physics oriented language: "Why must we get metabolic energy?". The answer of the teacher might be that we do not eat to get energy but to get order. The stuff that we eat contains ordered energy: we eat order. But order in standard physics is lack of entropy, lack of disorder. The student could get nosy and argue that excretion produces the same outcome as eating but is not enough to survive.
We could go to a deeper level and ask why metabolic energy is needed in biochemistry. Suppose we do this in TGD Universe with dark matter identified as phases characterized by heff/h=n.
1. Why would metabolic energy be needed? The intuitive answer is that evolution requires it and that evolution corresponds to the increase of n=heff/h. To see the answer to the question, notice that the energy scale for the bound states of an atom is proportional to 1/h2, and for a dark atom to 1/heff2 ∝ 1/n2 (do not confuse this n with the integer n labelling the states of the hydrogen atom!).
2. Dark atoms have smaller binding energies, and their creation by a phase transition increasing the value of n demands a feed of energy - metabolic energy! If the metabolic energy feed stops, n is gradually reduced. The system gets tired, loses consciousness, and eventually dies. Also in the case of cyclotron energies the positive cyclotron energy is proportional to heff, so that metabolic energy is needed to generate larger heff and the prerequisites for negentropy. In this case one would have very long range negentropic entanglement (NE), whereas dark atoms would correspond to short range NE corresponding to a lower evolutionary level. These entanglements would correspond to gravitational and electromagnetic quantum criticality.
What is remarkable is that the scale of atomic binding energies decreases with n only in dimension D=3. In other dimensions it increases, and in D=4 one cannot even speak of bound states! This can easily be found by a study of the Schrödinger equation for the analog of the hydrogen atom in various dimensions. Life based on metabolism seems to make sense only in spatial dimension D=3. Note however that there are also other quantum states than atomic states, with a different dependence of energy on heff.
3. The weak form of NMP, which follows from mere adelic physics, makes NMP analogous to the second law. Could one consider the purely formal generalization of dE = TdS - ... to dE = -TdN - ..., where E refers to metabolic energy and N refers to entanglement negentropy? No: the situation is different. The system is not a closed system; N is not the negative of the thermodynamical entropy S; and E is the metabolic energy fed to the system, not the system's internal energy. dE = TdN - ... might however make sense for a system to which metabolic energy is fed.
Note that the identification of N is still open: N could be identified as N = ∑pNp - S, where one has the sum of p-adic entanglement negentropies and the real entanglement entropy S, or as N = ∑pNp. For the first option one would have N=0 for rational entanglement and N>0 for extensions of rationals. Could rational entanglement be interpreted as that associated with dead matter?
4. Bio-catalysis and the ATP→ ADP process need not require metabolic energy. A transfer of negentropy from nutrients via ATP to the acceptor molecule would be in question. Metabolic energy would be needed to reload ADP with negentropy to give ATP, using ATP synthase as a mitochondrial power plant. Metabolites could be carriers of dark atoms of this kind, possibly carrying also NE. They could also carry NE associated with the dark cyclotron states as suggested earlier, and in this case the value of heff=hgr would be much larger than in the case of dark atoms.
Conditions on bio-catalysis
Bio-catalysis is a key mechanism of biology and its extreme efficacy remains to be understood. Enzymes are proteins, and ribozymes RNA sequences, acting as biocatalysts.
What does catalysis demand?
1. Catalyst and reactants must find each other. How this could happen is very difficult to understand in standard biochemistry, in which living matter is seen as a soup of biomolecules. I have already considered the mechanisms making it possible for the reactants to find each other. For instance, in the translation of mRNA to protein, tRNA molecules must find their way to the mRNA at the ribosome. The proposal is that reconnection allowing U-shaped magnetic flux tubes to reconnect to a pair of flux tubes connecting the mRNA and tRNA molecules, followed by a reduction of the value of heff=n× h inducing a reduction of the length of the magnetic flux tube, takes care of this step. This applies also to DNA transcription, DNA replication, and bio-chemical reactions in general.
2. Catalyst must provide energy for the reactants (their number is typically two) to overcome the potential wall making the reaction rate very slow for energies around thermal energy. The TGD based model for the hydrino atom having larger binding energy than the hydrogen atom, claimed by Randell Mills, suggests a solution. Some hydrogen atom in the catalyst goes from the (dark) hydrogen atom state to a hydrino state (a state with smaller heff/h) and liberates the excess binding energy, kicking either reactant over the potential wall so that the reaction can proceed. After the reaction the catalyst returns to the normal state and absorbs the binding energy.
3. In the reaction volume the catalyst and reactants must be guided to the correct places. The simplest model of catalysis relies on the lock-and-key mechanism. The generalized Chladni mechanism forcing the reactants to a two-dimensional closed nodal surface is a natural candidate to consider. There are also additional conditions. For instance, the reactants must have the correct orientation, and this could be forced by the interaction with the em field of the ME involved with the Chladni mechanism.
4. One must also have a coherence of chemical reactions, meaning that the reaction can occur in a large volume - say in different cell interiors - simultaneously. Here the MB would induce the coherence by using MEs. The Chladni mechanism might explain this if there is interference of the forces caused by periodic standing waves, themselves represented as pairs of MEs.
Phase transition reducing the value of heff/h=n as a basic step in bio-catalysis
The hydrogen atom also allows large heff/h=n variants with n>6, with the scale of the energy spectrum behaving as (6/n)2, if n=6 holds true for visible matter. The contraction of the flux tube would reduce n and liberate binding energy, which could be used to promote the catalysis.
The notion of the high energy phosphate bond is a somewhat mysterious concept. There are claims that there is no such bond. I have spent a considerable amount of time pondering this problem. Could phosphate contain a (dark) hydrogen atom able to go to a state with a smaller value of heff/h and liberate the excess binding energy? Could the phosphorylation of the acceptor molecule transfer this dark atom, associated with the phosphate of ATP, to the acceptor molecule? Could the mysterious high energy phosphate bond correspond to the dark atom state? Metabolic energy would be needed to transform ADP to ATP and would generate the dark atom.
Could solar light kick atoms into dark states and in this manner store metabolic energy? Could nutrients carry these dark atoms? Could this energy be liberated as the dark atoms return to ordinary states, and be used to drive protons against the potential gradient through ATP synthase - analogous to a turbine of a power plant - transforming ADP to ATP and reproducing the dark atom and thus the "high energy phosphate bond" in ATP? Can one see metabolism as a transfer of dark atoms? Could possible negentropic entanglement disappear and emerge again after ADP→ ATP?
Here it is essential that the energies of the hydrogen atom depend on hbar_eff = n× hbar as hbar_eff^m with m = -2 < 0. A hydrogen atom in dimension D has a Coulomb potential behaving as 1/r^(D-2) by the Gauss law, and the Schrödinger equation predicts for D≠ 4 that the energies satisfy E_n ∝ (heff/h)^m with m = 2 + 4/(D-4). For D=4 the formula breaks down, since in this case the dependence on hbar is not given by a power law. m is negative only for D=3, where one has m=-2. Thus D=3 would be the unique dimension allowing the hydrino-like states making possible bio-catalysis and life in the proposed scenario.
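A small sketch (mine) tabulating this exponent confirms that D=3 is the only dimension with a negative power, which is the whole point of the argument:

```python
# Exponent m in E_n ~ (heff/h)^m for a Coulomb potential ~ 1/r^(D-2)
# in D spatial dimensions; the power law breaks down at D=4.
def exponent(D):
    if D == 4:
        raise ValueError("no power-law dependence on heff in D=4")
    return 2 + 4 / (D - 4)

for D in (1, 2, 3, 5, 6):
    print(D, exponent(D))
# D=3 gives m = -2, the only negative value: binding energies decrease
# when heff grows, so creating dark atoms costs (metabolic) energy.
```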
It is also essential that the flux tubes are radial flux tubes in the Coulomb field of a charged particle. This makes sense in many-sheeted space-time: electrons would be associated with a pair formed by a flux tube and a 3-D atom, so that only part of the electric flux would interact with the electron touching both space-time sheets. This would give the analog of the Schrödinger equation in a Coulomb potential restricted to the interior of the flux tube. Dimensional analysis for the 1-D Schrödinger equation with Coulomb potential would give also in this case the 1/n2 dependence. The same applies to states localized to 2-D sheets with a charged ion in the center. These kinds of states bring to mind the Rydberg states of the ordinary atom with a large value of n.
For details see the chapter Quantum criticality and dark matter.
For a summary of earlier postings see Latest progress in TGD.
Articles and other material related to TGD.
Wednesday, February 01, 2017
Further details related to the induction of twistor structure
The notion of the twistor lift of TGD (see this and this) has turned out to have powerful implications for the understanding of the relationship of TGD to general relativity. The meaning of the twistor lift has however remained somewhat obscure. There are several questions to be answered. What does one mean with twistor space? What does the induction of the twistor structure of H=M4× CP2 to that of the space-time surface, realized as its twistor space, mean?
In TGD one replaces the imbedding space H=M4× CP2 with the product T(H)= T(M4)× T(CP2) of the 6-D twistor spaces of its factors, and calls T(H) the twistor space of H. For CP2 the twistor space is the flag manifold T(CP2)=SU(3)/U(1)× U(1) consisting of all possible choices of quantization axes of color isospin and hypercharge.
1. The basic idea is to generalize Penrose's twistor program by lifting the dynamics of space-time surfaces as preferred extremals of Kähler action to those of 6-D Kähler action in twistor space T(H). The conjecture is that field equations reduce to the condition that the twistor structure of space-time surface as 4-manifold is the twistor structure induced from T(H).
Induction requires that dimensional reduction occurs, effectively eliminating the twistor fiber S2(X4) from the dynamics. Space-time surfaces would be preferred extremals of 4-D Kähler action plus a volume term having an interpretation in terms of a cosmological constant. The twistor lift would be more than a mere alternative formulation of TGD.
2. The reduction would take place as follows. The 6-D twistor space T(X4) has S2 as fiber and can be expressed locally as a Cartesian product of a 4-D region of space-time and S2. The signature of the induced metric of S2 should be space-like or time-like depending on whether the space-time region is Euclidian or Minkowskian. This suggests that the twistor sphere of M4 is time-like, as the standard picture also suggests.
3. The twistor structure of the space-time surface is induced on the allowed 6-D surfaces of T(H), which as twistor spaces T(X4) must have a fiber space structure with S2 as fiber and the space-time surface X4 as base. The Kähler form of T(H), expressible as the direct sum
J(T(H)) = J(T(M4)) ⊕ J(T(CP2)),
induces as its projection the analog of Kähler form in the region of T(X4) considered.
There are physical motivations (CP breaking, matter antimatter asymmetry, the well-definedness of em charge) to consider the possibility that also M4 has a non-trivial symplectic/Kähler form, obtained as a generalization of the ordinary symplectic/Kähler form (see this). This requires the decomposition M4=M2× E2 such that M2 has a hypercomplex structure and E2 a complex structure.
This decomposition might even be local, with the tangent spaces M2(x) and E2(x) integrating to locally orthogonal 2-surfaces. These decompositions would define what I have called a Hamilton-Jacobi structure (see this). This would give rise to a moduli space of M4 Kähler forms allowing, besides covariantly constant self-dual Kähler forms with the decomposition (m0,m3) and (m1,m2), also more general self-dual closed Kähler forms assignable to integrable local decompositions. One example is the spherically symmetric stationary self-dual Kähler form corresponding to the decomposition (m0,rM) and (θ,φ), suggested by the need to get spherically symmetric minimal surface solutions of the field equations. Also the decomposition of Robertson-Walker coordinates to (a,r) and (θ,φ), assignable to the light-cone M4+, can be considered.
The moduli space giving rise to the decomposition of WCW to sectors would be finite-dimensional if the integrable 2-surfaces defined by the decompositions correspond to orbits of subgroups of the isometry group of M4 or CD. This would allow planes of M4, and radial half-planes and spheres of M4 in spherical Minkowski coordinates and of M4+ in Robertson-Walker coordinates. These decompositions could relate to the choices of measured quantum numbers inducing symmetry breaking to the subgroups in question. These choices would choose a sector of WCW (see this) and would define a quantum counterpart for a choice of quantization axes, as distinct from an ordinary state function reduction with chosen quantization axes.
4. The induced Kähler form of the S2 fiber of T(X4) is assumed to reduce to the sum of the induced Kähler forms from the S2 fibers of T(M4) and T(CP2). This requires that the projections of the Kähler forms of M4 and CP2 to S2(X4) are trivial. Also the induced metric is assumed to be a direct sum, and a similar condition holds true. These conditions are analogous to those occurring in dimensional reduction.
Denote the radii of the spheres associated with M4 and CP2 by RP=klP and R, and the ratio RP/R by ε. Both the Kähler form and the metric are proportional to RP2 resp. R2 and satisfy the defining condition J_kr g^rs J_sl = -g_kl. This condition is assumed to be true also for the induced Kähler form J(S2(X4)).
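As a concrete illustration of this defining condition (my own check, not part of the original text), one can verify it for the standard Kähler form of a sphere of radius R:

```python
# sympy check that the Kahler form of S^2 satisfies J_kr g^rs J_sl = -g_kl.
import sympy as sp

theta, R = sp.symbols('theta R', positive=True)
g = sp.Matrix([[R**2, 0],
               [0, R**2 * sp.sin(theta)**2]])   # round metric on S^2
J = sp.Matrix([[0, R**2 * sp.sin(theta)],
               [-R**2 * sp.sin(theta), 0]])     # Kahler form J_kl

print(sp.simplify(J * g.inv() * J + g))         # zero matrix, as required
```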
This is the general description. How many solutions to these conditions are obtained? It seems that there are essentially 3 solutions: the projection of the twistor space of the space-time surface to the twistor sphere of either M4 or CP2 is trivial, or it is trivial for both and the twistor spheres correspond to each other by a one-to-one isometry (see this).
For details see the chapter How the hierarchy of Planck constants might relate to the almost vacuum degeneracy for twistor lift of TGD? or the article Some questions related to the twistor lift of TGD.
For a summary of earlier postings see Latest progress in TGD.
Articles and other material related to TGD.
Joe Lykken on “Some good/bad news about string theory”
Joe Lykken, “String Theory for Physicists,” XXXIII SLAC Summer Institute, 2005, Lecture 1 [PDF]; Lecture 2 [PDF]; and Lecture 3 [PDF].
“Some good/bad news about string theory:
Good: String theory is a consistent theory of quantum gravity. Bad: It’s really a generator of an infinite number of mostly disconnected theories of quantum gravity, each around a different ground state. No background independent, truly off-shell formulation of string theory is known (yet).
Good: String theory is unique, i.e. there is only one distinct consistent theory of “fundamental” strings. Bad: It has an infinite number of continuously connected ground states plus a googol of discrete ones. There appears to be no vacuum selection principle, other than the stability of supersymmetric vacua, which gives the wrong answer.
Good: String theory gives you chiral gauge theories, with big gauge groups, for free, and complicated flavor structure at low energies is mapped into the geometry of extra dimensions. Bad: Doesn’t like to give the standard model as the low energy theory. A “typical” string compactification is either much simpler (with more SUSY and bigger gauge groups) or much more complicated (lots of extra exotic matter, extra U(1) gauge groups, etc.).
Good: String theory predicts supersymmetry and extra dimensions of space. Bad: It’s happy to hide them both up at the Planck scale.
Good: No length or energy scales are put in by hand; all scales should be determined dynamically. Bad: There appear to be too many (hundreds!) scalar fields (moduli) with too much SUSY to get determined dynamically; we may be forced to appeal to cosmic initial conditions (the Landscape).
Good: String theory gives a microphysical description of (at least some) black holes, and resolves their singularities. Bad: Doesn’t seem to resolve the singularity of the Big Bang (good for inflation, though).
Good: Lots of powerful dualities including weak ↔ strong coupling dualities and short ↔ long distance dualities. Bad: Can’t tell what are the “fundamental” degrees of freedom. String theory not necessarily a theory of strings.
Good: Unification of all the forces is almost for free, may need an (interesting) extra dimensional assist. Bad: In our most realistic string constructions so far, SU(3)C, SU(2)W, and U(1)Y have essentially nothing to do with each other: related to different features of complicated D-brane setups.
Good: AdS/CFT duality shows that 10-dimensional string theory in a certain background is equivalent to a 4-dimensional gauge theory!! Use this e.g. to show that RHIC QCD physics maps onto quantum gravity/black holes. Bad: Adds more confusion: can’t tell an extra dimension apart from technicolor.
Good: We are starting to use string theory to learn tricks for perturbative QCD, understanding the QCD string, etc. Bad: The QCD community was already doing fine, thank you.
That’s all folks!”
Dan Hooper on Light WIMPs
“The thermal abundance (“WIMP Miracle”) argument works roughly equally well for WIMPs with masses between ~1 GeV and several TeV, but historically, physicists have focused on ~40 GeV to ~1 TeV WIMPs, and papers have been written, analyses have been carried out, and experiments have been designed (and funded) with this bias in mind. But, I know of no compelling argument for why dark matter should not consist of ~1-20 GeV particles.” Dan Hooper (Fermilab/University of Chicago) vindicates “Light WIMPs!,” at TeV Particle Astrophysics Workshop, August 2011.
The body of evidence is quite suggestive: “DAMA/LIBRA, CoGeNT, and CRESST have each reported signals which are inconsistent with known backgrounds, and (roughly) consistent with the elastic scattering of ~5-10 GeV dark matter particles; the spectrum of gamma rays from the region surrounding the Galactic Center peaks at a few GeV, consistent with a ~7-10 GeV dark matter particle annihilating largely to leptons, with a cross section on the order of that predicted by relic abundance considerations.”
However, “the case is not yet incontrovertible.”
String theory and mathematical fertility
“String theory dominates the research landscape of quantum gravity physics (despite any direct experimental evidence) due to its mathematical fertility. String theory has generated many surprising, useful, and well-confirmed mathematical ‘predictions’ made on the basis of general physical principles entering into string theory. The success of the mathematical predictions are then seen as evidence for the framework that generated them. Smolin argues that if mathematical fertility could be an indicator of truth, then we ought to take the success of knot theory as evidence for the idea that atoms are indeed knotted bits of ether. Hence, we have an apparent reductio ad absurdum of the idea that I am arguing for in this paper, that mathematical fertility might lead us to believe more strongly in a theory. But the fact that Kelvin’s theory was eventually disconfirmed does not mean that it was a bad theory—after all, it was discussed and studied as a serious theory for some 20 years. It was precisely the fact that it was taken seriously as a physical theory that led to the development of knot theory. The physics of knots forms an integral part of modern physics, especially in condensed matter physics, quantum field theory, and quantum gravity.” Dean Rickles, “Mirror Symmetry and Other Miracles in Superstring Theory,” Found. Phys. 2011.
“String theory has not yet been able to make contact with experiments that would give us strong reasons to accept it as the ‘sure winner’ in the race to construct a theory of quantum gravity. However, though experiment can often function as a decisive arbiter in situations where there are several competing theories, there are many more theoretical virtues that play a role in our evaluation of theories. Taking these extraexperimental factors into account, string theory is very virtuous indeed, it is arguably the most mathematically fertile theory of the past century or so. I would go further and say that no direct experiment is likely to ever come about (other than ones that could be explained by multiple approaches), so we can assume that non-experimental factors will have to be relied upon more strongly in our assessments of future research in fundamental physics.”
On the nature of time in string theory
The journal Foundations of Physics commemorates “Forty Years of String Theory.” Vijay Balasubramanian (University of Pennsylvania) steps back and asks what we do not understand about time. What is time? Within the broader quantum gravity community outside string theory there has also been considerable thinking about time. Traditionally, in the study of quantum gravity the “problem of time” arises because the Schrödinger equation, when promoted to the diffeomorphism invariant context of gravity, becomes the Wheeler-de Witt equation, which simply says nothing about time evolution. This is sometimes interpreted as saying that in a quantum diffeomorphism-invariant universe time is meaningless. Vijay Balasubramanian presents nine questions and several lines of attack in string theory in his paper “What we don’t know about time,” ArXiv, 14 Jul 2011. Let me summarize his ideas.
Why is there an arrow of time? A common idea is that the arrow of time is cosmologically defined by the macroscopic increase of entropy (the second law of thermodynamics). But this raises the associated question of why the universe starts in a low entropy state. This approach also suggests that the notion of time is inherently connected to the coarse graining of an underlying quantum gravitational configuration space.
Why is there only one time? Geometrically, time is different from space because the geometry of spacetime is locally Minkowski (Lorentzian metric signature (1, 3)), not Euclidean (metric signature (0, 4)). From a geometrical point of view we could equally well imagine a signature (2, 2), with two times, which is more symmetric between space and time. In the context of string theory with its many extra dimensions one can ask why we seem to have extra spatial dimensions, not temporal dimensions.
Is there a connection between the existence of a time, and the quantumness of the universe? The difference between time and space is somehow implicated in the difference between quantum mechanics with its characteristic features of quantum interference and entanglement, and classical statistical physics which lacks these features. This kind of difference appears in nonrelativistic quantum mechanics, in quantum field theory, and even in string theory.
Could the real, Lorentzian structure of conventional spacetime be simply a convenient way of summarizing analytic information about an underlying complexified geometry? Physical quantities seem to be described by analytic functions of space and time in both quantum field theory and string theory.
How can singularities localized in time be resolved in string theory or some other quantum theory of gravity? A prediction of General Relativity is that spacetime singularities exist, either timelike (i.e. localized in space), lightlike (i.e. localized on a null curve), or spacelike (i.e. localized in time). One of the goals of a quantum theory of gravity such as string theory is to resolve such singularities.
Why is the area of a horizon, a causal construct, related to entropy, a thermodynamic concept, and can this entropy be given a statistical explanation for general horizons? Semiclassical analyses of quantum mechanics in spacetimes containing horizons like black holes and accelerating geometries such as de Sitter space suggest that inertial observers perceive the horizon as having an entropy proportional to area and a temperature proportional to the surface gravity at the horizon. Neither is there any explanation of why entropy becomes associated to a geometrical construct – the area of a horizon.
How precisely is physics beyond a black hole horizon encoded in a unitary description of spacetime? The “information loss paradox” for black holes is due to the non-unitary semiclassical evolution of quantum states in Hawking radiation. The apparent loss of unitarity can be traced ultimately to the causal disconnection of the region behind the horizon. A solution is required since there is simply no room in the full quantum theory for information loss in black holes.
Can time be emergent from the dynamics of a timeless theory? In the AdS/CFT correspondence, string theory in a (d+1)-dimensional, asymptotically Anti-de Sitter (AdS) spacetime is exactly equivalent to a d-dimensional quantum field theory defined on the timelike boundary of such a universe. Thus, the radial dimension of AdS spacetime (as well as any additional compact dimensions of the bulk string theory) must be regarded as somehow “emergent” from the dynamics of the d-dimensional field theory. The field theory contains a time and the emergent gravitational theory inherits its time directly from the field theory.
Are time and space concepts that only become effective in “phases” where the primordial degrees of freedom self-organize with appropriate relations of conditional dependence and entanglement? The spacetime and its metric are generally thought of as a coarse-grained description of some underlying degrees of freedom, which may, or may not, be organized with the proximity and continuity relations associated to smooth spacetime. The spacetime can be viewed as an emergent description of relations of conditional dependence among underlying fundamental variables.
If you have enjoyed the questions, please refer to the paper “What we don’t know about time” for possible lines of research in order to obtain the answers in string theory.
Lisi and Weatherall in Scientific American: “A Geometric Theory of Everything”
“In 2007 physicist A. Garrett Lisi wrote the most talked about theoretical physics paper of the year. He argues that the geometric framework of modern quantum physics can be extended to incorporate Einstein’s theory, leading to a long-sought unification of physics” based on a geometrical object referred to as the exceptional Lie Group E8. Lisi, the surfer physicist, has a Midas touch in Mathematical Physics. Everybody talks about his great achievements, even if they are criticized by the mainstream. In the December 2010 issue of Scientific American appears an 8-page article entitled “A Geometric Theory of Everything,” written with James Owen Weatherall. Let us extract some paragraphs from the paper.
“The current best theory of nongravitational forces—the electromagnetic, weak and strong nuclear force—was largely completed by the 1970s and has become familiar as the Standard Model of particle physics. Mathematically, the theory describes these forces and particles as the dynamics of elegant geometric objects called Lie groups and fiber bundles. Over the years physicists have proposed various Grand Unified Theories, or GUTs, in which a single geometric object would explain all these forces, but no one yet knows which, if any, of these theories is true. Lisi’s theory unifies all forces and matter into a single geometric object.
The main geometric idea underlying the Standard Model is that every point in our spacetime has shapes attached to it, called fibers, each corresponding to a different kind of particle. The entire geometric object is called a fiber bundle. The fibers are in internal spaces corresponding to particles’ properties. This idea was introduced by Hermann Weyl in 1918 for the unification of gravity and electromagnetism. The electric and magnetic fields existing everywhere in our space are the result of fibers with the simplest shape: the circle, called U(1) by physicists, the simplest example of a Lie group. The fiber bundle of electromagnetism consists of circles attached to every point of spacetime. An electromagnetic wave is the undulation of circles over spacetime. Photons and electrons have different fiber bundles over spacetime. The fibers of electrons wrap around the circular fibers of electromagnetism like threads around a screw. Because twists must meet around the circle, these charges are integer multiples of some standard unit of electric charge.
Physicists apply these same principles to the weak and strong nuclear forces. Each of these forces has its own kind of charge and its own propagating particles. They are described by more complicated fibers, made up not just of a single circle but of sets of intersecting circles, interacting with themselves and with matter according to their twists. The weak force is associated with a three-dimensional Lie group fiber called SU(2). Its shape has three symmetry generators, corresponding to the three weak-force boson particles: W+, W− and W3. Matter particles, fermions, come in two varieties, related to how their spin aligns with their momentum: left-handed and right-handed. Only the left-handed fermions have weak charges, with the left-handed up quark and neutrino having weak charge +1/2 and the left-handed down quark and electron having weak charge –1/2. For antiparticles, this is reversed. Our universe is not left-right symmetrical, one of many mysteries a unified theory seeks to explain.
The electroweak force unifies the weak force with electromagnetism by combining the SU(2) fiber with a U(1) circle. This circle is not the same as the electromagnetic one; it represents a precursor to electromagnetism known as the hypercharge force, with particles twisting around it according to their hypercharge, labeled Y. The W3 circles combine with the hypercharge circles to form a two-dimensional torus. The fibers of particles known as Higgs bosons twist around the electroweak Lie group and determine a particular set of circles, breaking the symmetry. The Higgs does not twist around these circles, which then correspond to the massless photon of electromagnetism. Perpendicular to these circles are another set that should correspond to another particle, which the developers of electroweak theory called the Z boson. The fibers of the Higgs bosons twist around the circles of the Z boson, as well as the circles of the W+ and W−, making all three particles massive. Experimental physicists discovered the Z in 1973, vindicating the theory and demonstrating how geometric principles have real-world consequences.
The strong nuclear force that binds quarks into atomic nuclei corresponds geometrically to an even larger Lie group, SU(3). The SU(3) fiber is an eight-dimensional internal space composed of eight sets of circles twisting around one another in an intricate pattern, producing interactions among eight kinds of photonlike particles called gluons on account of how they “glue” nuclei together. This fiber shape can be broken into comprehensible pieces. Embedded within it is a torus formed by two sets of untwisted circles, corresponding to two generators, g3 and g8. The remaining six gluon generators twist around this torus and their resulting g3 and g8 charges form a hexagon in the weight diagram. The quark fibers twist around this SU(3) Lie group, their strong charges forming a triangle in the weight diagram. These quarks are whimsically labeled with three colors: red, green and blue. A collection of matter fibers forming a complete pattern, such as three quarks in a triangle, is called a representation of the Lie group. The colorful description of the strong interactions is known as the theory of quantum chromodynamics.
Together, quantum chromodynamics and the electroweak model make up the Standard Model of particle physics, with a Lie group formed by combining SU(3), SU(2) and U(1), as well as matter in several representations. The Standard Model is a great success, but it presents several puzzles: Why does nature use this combination of Lie groups? Why do these matter fibers exist? Why do the Higgs bosons exist? Why is the weak mixing angle what it is? How is gravity included? The quarks, electrons and neutrinos that constitute common matter are called the first generation of fermions; they have second- and third-generation doppelgängers with identical charges but much larger masses. Why is that? And what are cosmic dark matter and dark energy? A unified theory should be able to provide answers to these and other questions.
A Grand Unified Theory uses a large Lie group with a single fiber encompassing both the electroweak and strong forces. The first attempt at such a theory was proposed in 1973, by Howard Georgi and Sheldon Glashow. They found that the combined Lie group of the Standard Model fits snugly into the Lie group SU(5) as a subgroup. This SU(5) GUT made some distinctive predictions. First, fermions should have exactly the hypercharges that they do. Second, the weak mixing angle should be 38 degrees, in fair agreement with experiments. And finally, in addition to the 12 Standard Model bosons, there are 12 new force particles in SU(5), called X bosons. It was the X bosons that got the theory into trouble. These new particles would allow protons to decay into lighter particles. In impressive experiments, including the observation of 50,000 tons of water in a converted Japanese mine, the predicted proton decay was not seen. Thus, physicists have ruled out this theory.
A related Grand Unified Theory, developed around the same time, is based on the Lie group Spin(10). It produces the same hypercharges and weak mixing angle as SU(5) and also predicts the existence of a new force, very similar to the weak force. This new “weaker” force, mediated by relatives of the weak-force bosons called W′+, W′− and W′3, interacts with right-handed fermions, restoring left-right symmetry to the universe at short distances. Although this theory predicts an abundance of X bosons—a full 30 of them—it also indicates that proton decay would occur at a lower rate than for the SU(5) theory. So the theory remains viable. The Spin(10) Lie group with its 45 bosons, along with its representations of 16 fermions and their 16 antifermions, are in fact all parts of a single Lie group, a special one known as the exceptional Lie group E6.
The classification of all the Lie groups revealed the existence of five exceptional ones that stand out: G2, F4, E6, E7 and E8. The fact that the bosons and fermions of Spin(10) and the Standard Model tightly fit the structure of E6, with its 78 generators, is remarkable. It provokes a radical thought. Up until now, physicists have thought of bosons and fermions as completely different. Bosons are parts of Lie group force fibers, and fermions are different kinds of fibers, twisting around the Lie groups. But what if bosons and fermions are parts of a single fiber? That is what the embedding of the Spin(10) GUT in E6 suggests. The structure of E6 includes both types of particles. In a radical unification of forces and matter, bosons and fermions can be combined as parts of a superconnection field. But E6 does not include the Higgs bosons or gravity.
A Lie group formulation of gravity uses the group Spin(1,3) for rotations in three space directions and one time direction. Now it is just a matter of putting the pieces together. With gravity described by Spin(1,3) and the favored Grand Unified Theory based on Spin(10), it is natural to combine them using a single Lie group, Spin(11,3), yielding a Gravitational Grand Unified Theory—as introduced last year by Roberto Percacci of the International School for Advanced Studies in Trieste and Fabrizio Nesti of the University of Ferrara in Italy. It brings us close to a full Theory of Everything. The Spin(11,3) Lie group allows for blocks of 64 fermions and, amazingly, predicts their spin, electroweak and strong charges perfectly. It also automatically includes a set of Higgs bosons and the gravitational frame; in fact, they are unified as “frame-Higgs” generators in Spin(11,3). The curvature of the Spin(11,3) fiber bundle correctly describes the dynamics of gravity, the other forces and the Higgs. It even includes a cosmological constant that explains cosmic dark energy. Everything falls into place.
Skeptics objected that the Spin(11,3) theory should be impossible. It appears to violate a theorem in particle physics, the Coleman-Mandula theorem, which forbids combining gravity with the other forces in a single Lie group. But the theorem has an important loophole: it applies only when spacetime exists. In the Spin(11,3) theory (and in E8 theory), gravity is unified with the other forces only before the full Lie group symmetry is broken, and when that is true, spacetime does not yet exist. Our universe begins when the symmetry breaks: the frame-Higgs field becomes nonzero, singling out a specific direction in the unifying Lie group. At this instant, gravity becomes an independent force, and spacetime comes into existence with a bang. Thus, the theorem is always satisfied. The dawn of time was the breaking of perfect symmetry.
Lisi’s theory uses the most beautiful structure in all of mathematics, the largest simple exceptional Lie group, E8. Just as E6 contains the structure of the Spin(10) Grand Unified Theory, with its 16 fermions, the E8 Lie group contains the structure of the Spin(11,3) Gravitational Grand Unified Theory, with its 64 Standard Model fermions, including their spins. In this way, gravity and the other known forces, the Higgs, and one generation of Standard Model fermions are all parts of the unified superconnection field of an E8 fiber bundle. The E8 Lie group, with 248 generators, has a wonderfully intricate structure. In addition to gravity and the Standard Model particles, E8 includes W’, Z’ and X bosons, a rich set of Higgs bosons, novel particles called mirror fermions, and axions—a cosmic dark matter candidate.
Even more intriguing is a symmetry of E8 called triality. Using triality, the 64 generators of one generation of Standard Model fermions can be related to two other blocks of 64 generators. These three blocks might intermix to reproduce the three generations of known fermions. In this way, the physical universe could emerge naturally from a mathematical structure without peer. The theory tells us what Higgs bosons are, how gravity and the other forces emerge from symmetry-breaking, why fermions exist with the spins and charges they have, and why all these particles interact as they do.
Although Lisi’s theory continues to be promising, much work remains to be done. We need to figure out how three generations of fermions unfold, how they mix and interact with the Higgs to get their masses, and exactly how E8 theory works within the context of quantum theory. If E8 theory is correct, it is likely the Large Hadron Collider will detect some of its predicted particles. If, on the other hand, the collider detects new particles that do not fit E8’s pattern, that could be a fatal blow for the theory. In either case, any particles that experimentalists uncover will lead us toward some geometric structure at the heart of nature. And if the structure of the universe at the tiny scales of elementary particles does turn out to be described by E8, with its 248 sets of circles wrapping around one another in an exquisite pattern, twisting and dancing over spacetime in all possible ways, then we will have achieved a complete unification and have the satisfaction of knowing we live in an exceptionally beautiful universe.
Lisi’s papers on ArXiv.
Gustafsson in PPC-CERN: “Fermi Gamma-ray Space Telescope Observations of the Galactic Center”
A forthcoming paper of the Fermi-LAT Collaboration will describe the method and results behind a new map of the galactic center - the γ-ray spectrum above 1 GeV after 2 years of Fermi operation by the Large Area Telescope (LAT). The announcement appears in Michael Gustafsson (Padova University, on behalf of the Fermi Collaboration), “Fermi Gamma-ray Space Telescope: Gamma-ray Observations and their Dark Matter Interpretations,” PPC 2011 @ CERN, June 14, 2011.
Serpico at PPC-CERN: “Theoretical aspects of dark matter indirect detection”
“Dark Matter (DM) was already discovered indirectly: via gravity. But gravity is “universal” and does not permit particle identification: a discovery via electromagnetic, strong or weak probes is needed. The LHC at CERN was designed to study the electroweak (EW) scale, however there is no astrophysical or cosmological evidence whatsoever for the EW scale being the right one for explaining the DM problem. In fact, there is no evidence that the astrophysical DM is made of particles. The logic has always been the opposite: since the EW scale can be motivated by particle physics, then it might offer “natural” candidates for the DM problem while being accessible to a multi-disciplinary strategy. In the “golden age” for direct searches and colliders, it’s advisable to go back to the “standard practice”: experiments must guide us to Beyond Standard Model (BSM) physics, following the good old pipeline: Particle Physics progress → Theory Framework → Prediction for indirect, allowing a priori searches.” Extracts from Pasquale D. Serpico, “Dark Matter Indirect Detection (theoretical aspects),” PPC 2011 CERN -14 June 2011.
Peter Barker
Myungshik Kim
Spin-probed matter-wave interferometry of levitated diamond nano-particles
Quantum mechanics is widely regarded as our most effective theory to date. Its accuracy and the insight it offers us are unprecedented and stunning. However, there remain serious problems with QM, and amongst them is the question of where it should break down and give way to classical mechanics. When we measure a quantum state we transition from unitary evolution governed by the Schrödinger equation to a probabilistic final outcome. However, what constitutes this measurement is not properly defined, other than by a rough idea of scale. Unless we adopt a 'Many Worlds' interpretation, in which there is no wavefunction collapse, we must make a subjective distinction between a quantum system and a measurement device capable of collapsing superpositions into definite states.
Collapse theories are one possible resolution to this problem, which is known as the measurement problem. By modifying the Schrödinger equation they promise a new, general mechanics; one which goes over to quantum mechanics in the limit of small masses, and goes over to classical mechanics in the limit of larger objects. Though various attempts have been made to resolve the measurement problem over the years, collapse theories are remarkable in that they are testable.
My work focuses on ways of testing collapse theories using optomechanical systems. Currently I am working on a scheme to use a levitated nanosphere, trapped inside an optical cavity, to probe the signature effects of the postulated noise field causing collapse. If such a field exists, it will interact with the sphere, acting on it like a Brownian noise source. In turn, this action on the position of the sphere will affect the light entering and leaving the cavity, and it is in the profile of the exiting light that we hope to look for evidence of collapse.
Monday, March 31, 2014
Planck's Constant = Human Convention Standard: Frequency vs Electronvolt
The recent posts on the photoelectric effect exhibit Planck's constant $h$ as a conversion standard between the unit of light frequency $\nu$ in $Hz\, = 1/s$ as periods per second and the electronvolt ($eV$), expressed in Einstein's law of photoelectricity:
• $h\times (\nu -\nu_0) = eU$,
where $\nu_0$ is the smallest frequency producing a photoelectric current, $e$ is the charge of an electron and $U$ the stopping potential in Volts $V$ for which the current is brought to zero for $\nu > \nu_0$. Einstein obtained, referring to Lenard's 1902 experiment with $\nu -\nu_0 = 1.03\times 10^{15}\, Hz$ corresponding to the ultraviolet limit of the solar spectrum and $U = 4.3\, V$,
• $h = 4.17\times 10^{-15} eVs$
to be compared with the reference value $4.135667516(91)\times 10^{-15}\, eVs$ used in Planck's radiation law. We see that here $h$ occurs as a conversion standard between Hertz $Hz$ and electronvolt $eV$ with
• $1\, Hz = 4.17\times 10^{-15}\, eV$
To connect to quantum mechanics, we recall that Schrödinger's equation is normalized with $h$ so that the first ionization energy of Hydrogen at frequency $\nu = 3.3\times 10^{15}\, Hz$ equals $13.6\, eV$, to be compared with $3.3\times 4.17 = 13.76\, eV$ corresponding to Lenard's photoelectric experiment.
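This conversion-factor reading of $h$ can be restated in a few lines of code (a minimal sketch of mine, using only the numbers quoted above):

```python
# h as a Hz <-> eV conversion factor, from Lenard's numbers.
nu_minus_nu0 = 1.03e15     # Hz, ultraviolet limit of the solar spectrum
U = 4.3                    # V, stopping potential; e*U in eV equals U numerically
h = U / nu_minus_nu0
print(h)                   # ~4.17e-15 eV*s, Einstein's value

# Calibration against hydrogen: ionization frequency times h gives ~13.6 eV.
print(3.3e15 * h)          # ~13.8 eV
```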
We understand that Planck's constant $h$ can be seen as a conversion standard between light energy measured by frequency and electron energy measured in electronvolts. The value of $h$ can then be determined by photoelectricity and thereafter calibrated into Schrödinger's equation to fit with ionization energies as well as into Planck's law as a parameter in the high-frequency cut-off (without a very precise value). The universal character of $h$ as a smallest unit of action is then revealed to simply be a human convention standard without physical meaning. What a disappointment!
• Planck's constant was introduced as a fundamental scale in the early history of quantum mechanics. We find a modern approach where Planck's constant is absent: it is unobservable except as a constant of human convention.
Finally: It is natural to view frequency $\nu$ as a measure of energy per wavelength, since radiance as energy per unit of time scales with $\nu\times\nu$ in accordance with Planck's law, which can be viewed as $\nu$ wavelengths each of energy $\nu$ passing a specific location per unit of time. We thus expect to find a linear relation between frequency and electronvolt as two energy scales: If 1 € (Euro) is equal to 9 Skr (Swedish Crowns), then 10 € is equal to 90 Skr.
Sunday, 30 March 2014
Photoelectricity: Millikan vs Einstein
The American physicist Robert Millikan received the Nobel Prize in 1923 for (i) experimental determination of the charge $e$ of an electron and (ii) experimental verification of Einstein's law of photoelectricity awarded the 1921 Prize.
Millikan started out his experiments on photoelectricity with the objective of disproving Einstein's law and in particular the underlying idea of light quanta. To his disappointment Millikan found that according to his experiments Einstein's law in fact was valid, but he resisted by questioning the conception of light-quanta even in his Nobel lecture:
• In view of all these methods and experiments the general validity of Einstein’s equation is, I think, now universally conceded, and to that extent the reality of Einstein’s light-quanta may be considered as experimentally established.
• But the conception of localized light-quanta out of which Einstein got his equation must still be regarded as far from being established.
• Whether the mechanism of interaction between ether waves and electrons has its seat in the unknown conditions and laws existing within the atom, or is to be looked for primarily in the essentially corpuscular Thomson-Planck-Einstein conception as to the nature of radiant energy is the all-absorbing uncertainty upon the frontiers of modern Physics.
Millikan's experiments consisted in subjecting a metallic surface to light of different frequencies $\nu$ and measuring the resulting photoelectric current, determining a smallest frequency $\nu_0$ producing a current and the (negative) stopping potential required to bring the current to zero for frequencies $\nu >\nu_0$. Millikan thus measured $\nu_0$ and $V$ for different frequencies $\nu > \nu_0$ and found a linear relationship between $\nu -\nu_0$ and $V$, which he expressed as
• $\frac{h}{e}(\nu -\nu_0)= V$,
in terms of the charge $e$ of an electron, which he had already determined experimentally, and the constant $h$, which he determined to have the value $6.57\times 10^{-34}\, Js$. The observed linear relation between $\nu -\nu_0$ and $V$ could then be expressed as
• $h\nu = h\nu_0 +eV$
which Millikan had to admit was nothing but Einstein's law with $h$ representing Planck's constant.
But Millikan could argue that, after all, the only thing he had done was to establish a macroscopic linear relationship between $\nu -\nu_0$ and $V$, which in itself did not give undeniable evidence of the existence of microscopic light-quanta. What Millikan did was to measure the current for different potentials of the plus pole receiving the emitted electrons under different exposure to light and thereby discovered a linear relationship between frequency $\nu -\nu_0$ and stopping potential $V$ independent of the intensity of the light and properties of the metallic surface.
By focussing on frequency and stopping potential Millikan could make his experiment independent of the intensity of incoming light and of the metallic surface, and thus capture a conversion between light energy and electron energy of general significance.
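The logic of such a fit can be illustrated with a minimal Python sketch (the data below are fabricated for illustration, not Millikan's measurements): a linear fit of $V$ against $\nu$ recovers $\frac{h}{e}$ as the slope and $\nu_0$ from the intercept.

```python
# Sketch: recovering h/e the way Millikan did, by a linear fit of
# stopping potential V against frequency nu. Illustrative data only.
import numpy as np

e = 1.602e-19                  # C, electron charge
h_true = 6.626e-34             # Js, used only to fabricate the data
nu0 = 1.0e15                   # Hz, assumed threshold frequency
nu = np.linspace(1.2e15, 2.0e15, 5)
V = h_true / e * (nu - nu0)    # V = (h/e)*(nu - nu0), Einstein's law

slope, intercept = np.polyfit(nu, V, 1)   # V = (h/e)*nu - (h/e)*nu0
print(slope * e)               # ~6.6e-34 Js, the value of h
print(-intercept / slope)      # ~1e15 Hz, the recovered nu0
```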
But why then should stopping potential $V$ scale with frequency $\nu - \nu_0$, or $eV$ scale with frequency $h(\nu - \nu_0)$? Based on the analysis on Computational Blackbody Radiation the answer would be that $h\nu$ represents a threshold energy for emission of radiation in Planck's radiation law and $eV$ represents a threshold energy for emission of electrons, none of which would demand light quanta.
Saturday, 29 March 2014
Einstein: Genius by Definition of Law of Photoelectricity
Einstein opened the brave new world of modern physics in two articles in his 1905 annus mirabilis: one giving humanity a breathtaking, entirely new view of space and time through the special theory of relativity, and the other on photoelectricity, introducing light quanta, carried by light particles later named photons, preparing the development of quantum mechanics.
Einstein's science is difficult to understand because it is never clear if the basic postulates of his theories are definitions without physics content, that is tautologies which are true by semantic construction, or if they are statements about physics which may be true or not true depending on realities.
The special theory of relativity is based on a postulate that the speed of light (in vacuum) is the same for all observers independent of motion with constant velocity. With the new definition of length scale of a lightsecond to be used by all observers, the speed of light for all observers is equal to one lightsecond per second and thus simply a definition or agreement between different observers.
Yet physicists by training firmly believe that the speed of light is constant as a physical fact behind the definition. For Einstein and all modern physicists following in his footsteps, definition and statement about physics come together into one postulate of relativity which can flip back and forth between definition and statement about physics and thereby ruin any attempt to bring clarity into a scientific discussion. Einstein played this game masterfully by formulating special relativity as a prescription or definition or dictate that different observers are to coordinate observations by the Lorentz transformation. A dictate cannot be false. It can only be disastrous.
Let us now check if Einstein's law of photoelectricity, which gave him the 1921 Nobel Prize in Physics, is also a definition and thus empty of physics content. The law takes the form
• $h(\nu -\nu_0) =eV$,
which expresses an energy balance for one electron of charge $e$ being ejected from a certain metallic surface by incoming light of frequency $\nu$ with $\nu_0$ the smallest frequency for which any electrons are ejected and $V$ is the potential required to stop a current of electrons for $\nu > \nu_0$. The relation can be written
• $h\nu = h\nu_0 + eV$
expressing a balance of incoming energy $h\nu$ as release energy $h\nu_0$ and electron (kinetic) energy after ejection $eV$ measured by the stopping potential $V$.
There is one more parameter in the energy balance and that is $h$, which is Planck's constant.
Measuring the stopping potential $V$ for light of different frequencies $\nu$, including determining $\nu_0$, and finding a linear relationship between $\nu -\nu_0$ and $V$, would then allow the determination of a value of $h$ making the law true. This works and is in fact a standard way of experimentally determining the value of Planck's constant $h$.
In this perspective Einstein's law of photoelectricity comes out as a definition through which the value of $h$ is determined, which effectively corresponds to a conversion standard from the dimension Joule of $h\nu$ as light energy to the dimension electronvolt of $eV$ as electron energy, and which says nothing about the existence of discrete packets of energy or light quanta.
The physics enters only in the assumed linear relation between $\nu$ and $V$. From the derivation of Planck's law on Computational Blackbody Radiation it is clear that $h\nu$ in the high-frequency cut-off factor $\frac{\alpha}{\exp(\alpha )-1}$ with $\alpha=\frac{h\nu}{kT}$ in Planck's law, acts as a threshold value, that is as a certain quantity $h\nu$ of energy per atomic energy $kT$ required for emission of radiation. This strongly suggests a linear relationship between $\nu$ and $V$ since $V$ also serves as a threshold.
We thus conclude that the general form of Einstein's law of photoelectricity as a linear relationship in an energy balance for each electron between the frequency of incoming light $\nu$ and the stopping potential $V$, naturally comes out from the role of $h\nu$ as threshold value modulo $kT$.
Once the linear relationship is postulated as physics, the value of $h$ to make the law fit with observation is a matter of definition as effectively determining energy conversion between light energy as $h\nu$ in Joule and electron energy as $eV$ in electronvolt. The quantity $h\nu$ is then a threshold value and not a discrete packet of energy and $\frac{h}{e}$ sets an exchange rate between two different currencies of frequency and stopping potential.
In other words, Einstein received the Nobel Prize for formulating a definition almost empty of physics content. It shows that the concept of a photon as a light particle carrying the discrete packet of energy $h\nu$ is also a definition empty of physics content.
Another aspect emerging from the above analysis is an expected (and observed) temperature dependence of photoelectricity, which is not expressed in Einstein's law. The release energy is expected to depend on temperature and there is no reason to expect that the stopping potential should compensate so as to make determination of $h$ by photoelectricity independent of temperature. What is needed is then an extension of Einstein's law to include dependence on temperature.
It remains to sort out the appearance of the parameter $h$ (determined by photoelectricity) in Planck's radiation law and in Schrödinger's equation, which has already been touched upon in a previous post, but will be addressed in more detail in an upcoming post.
The advantage of using definitions as postulates about physics is that you can be absolutely sure that your physics is correct (but empty). This aspect came out when Einstein, confronted with an observation claimed to contradict special relativity, could say with absolute confidence that the observation was wrong:
• If the facts don't fit the theory, change the facts.
• Whether you can observe a thing or not depends on the theory which you use.
• It is the theory which decides what we can observe.
• What I'm really interested in is whether God could have made the world in a different way; that is, whether the necessity of logical simplicity leaves any freedom at all.
In this form of physics what you see depends on the glasses you put on and not on what you are looking at. In this form of physics the observer decides if Schrödinger's cat is dead or alive by the mere act of looking at the cat, and not the cat itself, even if it has nine lives.
PS1 To view $h\nu$ as a packet of energy carried by a photon is non-physical and confusing for several reasons, one being that radiation intensity as energy per unit of time scales as $\nu^2$ and thus the scaling as $\nu$ of photon energy is compensated by a flow of photons per unit time scaling as $\nu$, with each photon occupying a half wave length.
PS2 If now Einstein is a genius by definition, there is as little reason to question that as questioning that there are 100 centimeters on a meter.
Thursday, 27 March 2014
How to Make Schrödinger's Equation Physically Meaningful + Computable
The derivation of Schrödinger's equation as the basic mathematical model of quantum mechanics is hidden in mystery: The idea is somehow to start considering a classical Hamiltonian $H(q,p)$ as the total energy equal to the sum of kinetic and potential energy:
• $H(q,p)=\frac{p^2}{2m} + V(q)$,
where $q(t)$ is position and $p=m\dot q= m\frac{dq}{dt}$ momentum of a moving particle of mass $m$, and make the formal ad hoc substitution with $\bar h =\frac{h}{2\pi}$ and $h$ Planck's constant:
• $p = -i\bar h\nabla$ with formally $\frac{p^2}{2m} = - \frac{\bar h^2}{2m}\nabla^2 = - \frac{\bar h^2} {2m}\Delta$,
to get Schrödinger's equation in time dependent form
• $ i\bar h\frac{\partial\psi}{\partial t}=H\psi $,
with now $H$ a differential operator acting on a wave function $\psi (x,t)$ with $x$ a space coordinate and $t$ time, given by
• $H\psi \equiv -\frac{\bar h^2}{2m}\Delta \psi + V\psi$,
where now $V(x)$ acts as a given potential function. As a time independent eigenvalue problem Schrödinger's equation then takes the form:
• $H\psi = E\psi$,
with $E$ an eigenvalue, as a stationary value for the total energy
• $K(\psi ) + W(\psi )\equiv\frac{\bar h^2}{2m}\int\vert\nabla\psi\vert^2\, dx +\int V\psi^2\, dx$,
as the sum of kinetic energy $K(\psi )$ and potential energy $W(\psi )$, under the normalization $\int\psi^2\, dx = 1$. The ground state then corresponds to minimal total energy.
We see that the total energy $K(\psi ) + W(\psi)$ can be seen as a smoothed version of $H(q,p)$ with
• $V(q)$ replaced by $\int V\psi^2\, dx$,
• $\frac{p^2}{2m}=\frac{m\dot q^2}{2}$ replaced by $\frac{\bar h^2}{2m}\int\vert\nabla\psi\vert^2\, dx$,
and Schrödinger's equation as expressing stationarity of the total energy, as an analog of the classical equations of motion expressing stationarity of the Hamiltonian $H(p,q)$ under variations of the path $q(t)$.
We conclude that Schrödinger's equation for a one electron system can be seen as a smoothed version of the equation of motion for a classical particle acted upon by a potential force, with Planck's constant serving as a smoothing parameter.
Similarly it is natural to consider smoothed versions of classical many-particle systems as quantum mechanical models resembling Hartree variants of Schrödinger's equation for many-electron systems, that is quantum mechanics as smoothed particle mechanics, thereby (maybe) reducing some of the mystery of Schrödinger's equation and opening to computable quantum mechanical models.
We see Schrödinger's equation arising from a Hamiltonian as total energy, kinetic energy + potential energy, rather than from a Lagrangian as kinetic energy - potential energy. The reason is a confusing terminology, with $K(\psi )$ named kinetic energy even though it does not involve time differentiation, while it more naturally should occur in a Lagrangian as a form of potential energy, like elastic energy in classical mechanics.
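To illustrate the computability point, here is a minimal sketch (our own, with assumed units $\bar h = m = 1$ and a harmonic potential $V(x)=\frac{x^2}{2}$) solving the eigenvalue problem $H\psi = E\psi$ in 1d by finite differences:

```python
# Sketch: H = -(1/2)d^2/dx^2 + V discretized by finite differences,
# with H psi = E psi solved as a standard matrix eigenvalue problem.
import numpy as np

n, L = 400, 10.0
x = np.linspace(-L / 2, L / 2, n)
dx = x[1] - x[0]

# Second-difference approximation of the Laplacian (Dirichlet ends)
lap = (np.diag(np.full(n - 1, 1.0), -1) - 2.0 * np.eye(n)
       + np.diag(np.full(n - 1, 1.0), 1)) / dx**2
H = -0.5 * lap + np.diag(0.5 * x**2)

E, psi = np.linalg.eigh(H)     # eigenvalues in ascending order
print(E[:3])                   # ~[0.5, 1.5, 2.5]; the ground state minimizes K + W
```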
Wednesday, 26 March 2014
New Paradigm of Computational Quantum Mechanics vs ESS
ESS, the European Spallation Source, is a €3 billion projected research facility captured by clever Swedish politicians to be allocated to the plains outside the old university town of Lund in Southern Sweden, with start in 2025: "Neutrons are excellent for probing materials on the molecular level – everything from motors and medicine, to plastics and proteins. ESS will provide around 30 times brighter neutron beams than existing facilities today. The difference between the current neutron sources and ESS is something like the difference between taking a picture in the glow of a candle, or doing it under flash lighting."
Quantum mechanics was invented in the 1920s under the limits of pen and paper computation, but allowing limitless theory thriving in Hilbert spaces populated by multidimensional wave functions described by fancy symbols on paper. Lofty theory and sparse computation were compensated by inflating the observer role of the physicist into a view that only physics observed by a physicist was real physics, with extra support from a conviction that the life or death of Schrödinger's cat depended more on the observer than on the cat, and that supercolliders are very expensive. The net result was (i) uncomputable limitless theory combined with (ii) unobservable practice as the essence of the Copenhagen Interpretation filling text books.
Today the computer opens to a change from impossibility to possibility, but this requires a fundamental change of the mathematical models from uncomputable to computable non-linear systems in 3d of Hartree-Schrödinger equations (HSE) or Density Functional Theory (DFT). This brings theory and computation together into a new paradigm of Computational Quantum Mechanics (CQM), shortly summarized as follows:
1. Experimental inspection of microscopic physics difficult/impossible.
2. HSE-DFT for many-particle systems are solvable computationally.
3. HSE-DFT simulation allows detailed inspection of microscopics.
4. Assessment of HSE simulations can be made by comparing macroscopic outputs with observation.
The linear multidimensional Schrödinger equation has no meaning in CQM and a new foundation is asking to be developed. The role of observation in the Copenhagen Interpretation is taken over by computation in CQM: Only computable physics is real physics, at least if physics is a form of analog computation, which may well be the case. The big difference is that anything computed can be inspected and observed, which opens to non-destructive testing with only limits set by computational power.
The Large Hadron Collider (LHC) and the projected neutron collider European Spallation Source (ESS) in Lund in Sweden represent the old paradigm of smashing to pieces the fragile structure under investigation and as such may well be doomed.
Tuesday, 25 March 2014
Fluid Turbulence vs Quantum Electrodynamics
Quantum Physics as Digital Continuum Physics
Quantum mechanics was born in 1900 in Planck's theoretical derivation of a modification of Rayleigh-Jeans law of blackbody radiation based on statistics of discrete "quanta of energy" of size $h\nu$, where $\nu$ is frequency and $h =6.626\times 10^{-34}\, Js$ is Planck's constant.
This was the result of a long fruitless struggle to explain the observed spectrum of radiating bodies using deterministic electromagnetic wave theory, which ended in Planck's complete surrender to statistics as the only way he could see to avoid the "ultraviolet catastrophe" of infinite radiation energies, in a return to the safe haven of his dissertation work in 1889-90 based on Boltzmann's statistical theory of heat.
Planck described the critical step in his analysis of a radiating blackbody as a discrete collection of resonators as follows:
• We must now give the distribution of the energy over the separate resonators of each frequency, first of all the distribution of the energy $E$ over the $N$ resonators of frequency $\nu$. If $E$ is considered to be a continuously divisible quantity, this distribution is possible in infinitely many ways.
• We consider, however (this is the most essential point of the whole calculation), $E$ to be composed of a well-defined number of equal parts and use thereto the constant of nature $h = 6.55\times 10^{-27}\, erg\, sec$. This constant multiplied by the common frequency $\nu$ of the resonators gives us the energy element $\epsilon$ in $erg$, and dividing $E$ by $\epsilon$ we get the number $P$ of energy elements which must be divided over the $N$ resonators.
• If the ratio thus calculated is not an integer, we take for $P$ an integer in the neighbourhood. It is clear that the distribution of P energy elements over $N$ resonators can only take place in a finite, well-defined number of ways.
We here see Planck introducing a constant of nature $h$, later referred to as Planck's constant, with a corresponding smallest quanta of energy $h\nu$ for radiation (light) of frequency $\nu$.
Then Einstein entered in 1905 with a law of photoelectricity with $h\nu$ viewed as the energy of a light quantum of frequency $\nu$, later named photon and crowned as an elementary particle.
Finally, in 1926 Schrödinger formulated a wave equation involving a formal momentum operator $-i\bar h\nabla$ with $\bar h =\frac{h}{2\pi}$ including Planck's constant $h$, as the birth of quantum mechanics, the incarnation of modern physics based on postulating that microscopic physics is
1. "quantized" with smallest quanta of energy $h\nu$,
2. indeterministic with discrete quantum jumps obeying laws of statistics.
However, microscopics based on statistics is contradictory, since it requires microscopics of microscopics in an endless regression, which has led modern physics into an impasse of ever increasing irrationality, into many-worlds and string theory as expressions of scientific regression to microscopics of microscopics. The idea of "quantization" of the microscopic world goes back to the atomism of Democritus, a primitive scientific idea rejected already by Aristotle arguing for the continuum, which however, combined with modern statistics, has ruined physics.
But there is another way of avoiding the ultraviolet catastrophe without statistics, which is presented on Computational Blackbody Radiation: physics viewed as analog finite precision computation, which can be modeled as digital computational simulation.
This is physics governed by deterministic wave equations with solutions evolving in analog computational processes, which can be simulated digitally. This is physics without microscopic games of roulette as rational deterministic classical physics subject only to natural limitations of finite precision computation.
This opens to a view of quantum physics as digital continuum physics which can bring rationality back to physics. It opens to explore an analog physical atomistic world as a digital simulated world where the digital simulation reconnects to analog microelectronics. It opens to explore physics by exploring the digital model, readily available for inspection and analysis in contrast to analog physics hidden to inspection.
The microprocessor world is "quantized" into discrete processing units, but it is a deterministic world with digital output.
Monday, 24 March 2014
Hollywood vs Principle of Least Action
The fictional character of the Principle of Least Action, viewed as serving a fundamental role in physics, can be understood by comparing with making movies:
The dimension of action as energy $\times$ time comes out very naturally in movie making as actor energy $\times$ length of the scene. However, outside Hollywood a quantity of dimension energy $\times$ time is questionable from a physical point of view, since there seems to be no natural movie camera which can record and store such a quantity.
Sunday, 23 March 2014
Why the Same Universal Quantum of Action $h$ in Radiation, Photoelectricity and Quantum Mechanics?
Planck's constant $h$ as The Universal Quantum of Action was introduced by Planck in 1900 as a mathematical statistical trick to supply the classical Rayleigh-Jeans radiation law $I(\nu ,T)=\gamma T\nu^2$ with a high-frequency cut-off factor $\theta (\nu ,T)$ to make it fit with observations including Wien's displacement law, where
• $\theta (\nu ,T) =\frac{\alpha}{\exp(\alpha )-1}$,
• $\alpha =\frac{h\nu}{kT}$,
$\nu$ is frequency, $T$ temperature in Kelvin $K$, $k =1.38066\times 10^{-23}\, J/K$ is Boltzmann's constant and $\gamma =\frac{2k}{c^2}$ with $c\, m/s$ the speed of light in vacuum. Planck then determined $h$ from experimental radiation spectra to have a value of $6.55\times 10^{-34}\, Js$, as well as Boltzmann's constant to be $1.346\times 10^{-23}\, J/K$, with $\frac{h}{k}= 4.87\times 10^{-11}\, Ks$ as the effective parameter in the cut-off.
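The quoted effective parameter is simple arithmetic to verify, using Planck's own 1900 values:

```python
# Check of the effective cut-off parameter h/k from Planck's 1900 values.
h_planck = 6.55e-34      # Js
k_planck = 1.346e-23     # J/K
print(h_planck / k_planck)   # ~4.87e-11 Ks, as stated above
```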
Planck viewed $h$ as a fictional mathematical quantity without real physical meaning, with $h\nu$ a fictional smallest packet of energy of a wave of frequency $\nu$, but in 1905 the young ambitious Einstein suggested an energy balance for photoelectricity of the form
• $h\nu = W + E$,
with $W$ the energy required to release one electron from a metallic surface and $E$ the energy of a released electron, with $h\nu$ interpreted as the energy of a light photon of frequency $\nu$ as a discrete lump of energy. Since the left hand side $h\nu$ in this law of photoelectricity was determined by the value of $h$ in Planck's radiation law, a new energy measure for electrons, the electronvolt, was defined by the relation $W + E =h\nu$. As if by magic, the same Universal Quantum of Action $h$ then appeared to serve a fundamental role in both radiation and photoelectricity.
What a wonderful magical coincidence that the energy of a light photon of frequency $\nu$ showed to be exactly $h\nu\, Joule$! In one shot Planck's fictional smallest quanta of energy $h\nu$, in the hands of the young ambitious Einstein, had been turned into reality as the energy of a light photon of frequency $\nu$, and of course, because a photon carries a definite packet of energy, a photon must be real. Voila!
In 1926 Planck's constant $h$ showed up again in a new context, now in Schrödinger's equation
• $-\frac{\bar h^2}{2m}\Delta\psi = E\psi$
with the formal connection
• $p = -i\bar h \nabla$ with $\bar h =\frac{h}{2\pi}$,
• $\frac{\vert p\vert^2}{2m} = E$,
as a formal analog of the classical expression of kinetic energy $\frac{\vert p\vert ^2}{2m}$ with $p=mv$ momentum, $m$ mass and $v$ velocity.
Planck's constant $h$, originally determined to make theory fit with observations of radiation spectra and then by Planck in 1900 canonized as The Universal Quantum of Action, thus in 1905 served to attribute the energy $h\nu$ to the new fictional formal quantity of a photon of frequency $\nu$. In 1926 a similar formal connection was made in the formulation of Schrödinger's wave equation.
The result is that the same Universal Quantum of Action $h$ is claimed by all modern physicists to play a fundamental role in (i) radiation, (ii) photoelectricity and (iii) quantum mechanics of the atom. This is taken as an expression of a deep mystical one-ness of physics which only physicists can grasp, while in fact it is a play with definitions without mystery, where $h$ appears as a parameter in a high-frequency cut-off factor in Planck's Law, or rather in the combination $\hat h =\frac{h}{k}$, and then is transferred into (ii) and (iii) by definition. Universality can in this way be created by human hands, by definition. The power of thinking has no limitations, or cut-off.
No wonder that Schrödinger had lifelong interest in the Vedanta philosophy of Hinduism "played out on one universal consciousness".
But Einstein's invention of the photon as light quantum in 1905 haunted him throughout his life, and approaching the end, in 1954 he confessed:
• All these fifty years of conscious brooding have brought me no nearer to the answer to the question, "What are light quanta?" Nowadays every Tom, Dick and Harry thinks he knows it, but he is mistaken.
Real physics always turns out to be more interesting than fictional physics, cf. Dr Faustus of Modern Physics.
PS Planck's constant $h$ is usually measured by (ii) and is then transferred to (i) and (iii) by ad hoc definition.
The Torturer's Dilemma vs Uncertainty Principle vs Computational Simulation
Bohr expressed in Light and Life (1933) the Thanatological Principle stating that to check out the nature of something, one has to destroy that very nature, which we refer to as The Torturer's Dilemma:
• We should doubtless kill an animal if we tried to carry the investigations of its organs so far that we could describe the role played by single atoms in vital functions. In every experiment on living organisms, there must remain an uncertainty as regards the physical conditions to which they are subjected…the existence of life must be considered as an elementary fact that cannot be explained, but must be taken as a starting point in biology, in a similar way as the quantum of action, which appears as an irrational element from the point of view of classical mechanics, taken together with the existence of the elementary particles, forms the foundation of atomic physics.
• It has turned out, in fact, that all effects of light may be traced down to individual processes, in which a so-called light quantum is exchanged, the energy of which is equal to the product of the frequency of the electromagnetic oscillations and the universal quantum of action, or Planck's constant. The striking contrast between this atomicity of the light phenomenon and the continuity of the energy transfer according to the electromagnetic theory, places us before a dilemma of a character hitherto unknown in physics.
Bohr's starting point for his "Copenhagen" version of quantum mechanics still dominating text books, was:
• Planck's discovery of the universal quantum of action which revealed a feature of wholeness in individual atomic processes defying casual description in space and time.
• Planck's discovery of the universal quantum of action taught us that the wide applicability of the accustomed description of the behaviour of matter in bulk rests entirely on the circumstance that the action involved in phenomena on the ordinary scale is so large that the quantum can be completely neglected. (The Connection Between the Sciences, 1960)
Bohr thus argued that the success of the notion of universal quantum of action depends on the fact that it can be completely neglected.
The explosion of digital computation since Bohr's time offers a new way of resolving the impossibility of detailed inspection of microscopics, by allowing detailed non-invasive inspection of computational simulations of microscopics. With this perspective, efforts should be directed to the development of computable models of microscopics, rather than smashing high speed protons or neutrons into innocent atoms in order to find out their inner secrets, without getting reliable answers.
Saturday, 22 March 2014
The True Meaning of Planck's Constant as Measure of Wavelength of Maximal Radiance and Small-Wavelength Cut-off.
The modern physics of quantum mechanics was born in 1900 when Max Planck, after many unsuccessful attempts, in an "act of despair" introduced a universal smallest quantum of action $h= 6.626\times 10^{-34}\, Js = 4.14\times 10^{-15}\, eVs$, named Planck's constant, in a theoretical justification of the spectrum of radiating bodies observed in experiments, based on statistics of packets of energy of size $h\nu$ with $\nu$ frequency.
Planck describes this monumental moment in the history of science in his 1918 Nobel Lecture as follows:
• For many years, such an aim for me was to find the solution to the problem of the distribution of energy in the normal spectrum of radiating heat.
• Nevertheless, the result meant no more than a preparatory step towards the initial onslaught on the particular problem which now towered with all its fearsome height even steeper before me. The first attempt upon it went wrong…
• So there was nothing left for me but to tackle the problem from the opposite side, that of thermodynamics, in which field I felt, moreover, more confident.
• Since the whole problem concerned a universal law of Nature, and since at that time, as still today, I held the unshakeable opinion that the simpler the presentation of a particular law of Nature, the more general it is…
• For this reason, I busied myself, from then on, that is, from the day of its establishment, with the task of elucidating a true physical character for the formula, and this problem led me automatically to a consideration of the connection between entropy and probability, that is, Boltzmann's trend of ideas; until after some weeks of the most strenuous work of my life, light came into the darkness, and a new undreamed-of perspective opened up before me.
Planck thus finally succeeded in proving Planck's radiation law as a modification of Rayleigh-Jeans law with a high-frequency cut-off factor eliminating "the ultraviolet catastrophe" which had paralyzed physics shortly after the introduction of Maxwell's wave equations for electromagnetics as the culmination of classical physics.
Planck's constant $h$ enters Planck's law
• $I(\nu ,T)=\gamma \theta (\nu , T)\nu^2 T$, where $\gamma =\frac{2k}{c^2}$,
where $I(\nu ,T)$ is normalized radiance; $h$ enters as a parameter in the multiplicative factor
• $\theta (\nu ,T)=\frac{\alpha}{e^{\alpha} -1}$,
where $\nu$ is frequency, $T$ temperature in Kelvin $K$ and $k = 1.38\times 10^{-23}\, J/K = 8.62\times 10^{-5}\, eV/K$ is Boltzmann's constant and $c\, m/s$ the speed of light.
We see that $\theta (\nu ,T)\approx 1$ for small $\alpha$ and enforces a high-frequency small-wavelength cut-off for $\alpha > 10$, that is, for
• $\nu > \nu_{max}\approx \frac{10T}{\hat h}$ where $\hat h =\frac{h}{k}=4.8\times 10^{-11}\, Ks$,
• $\lambda < \lambda_{min}\approx \frac{c}{10T}\hat h$ where $\nu\lambda =c$,
with maximal radiance occurring for $\alpha = 2.821$ in accordance with Wien's displacement law. With $T = 1000\, K$ the cut-off is in the visible range, for $\nu\approx 2\times 10^{14}\, Hz$ and $\lambda\approx 10^{-6}\, m$. We see that the relation
• $\frac{c}{10T}\hat h =\lambda_{min}$,
gives $\hat h$ a physical meaning as measure of wave-length of maximal radiance and small-wavelength cut-off of atomic size scaling with $\frac{c}{T}$.
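These numbers are easy to check; a small Python sketch (our own illustration) computes the Wien peak $\alpha\approx 2.821$ and the $T=1000\, K$ cut-off quoted above:

```python
# Sketch: radiance ~ nu^2*theta(alpha) ~ alpha^3/(exp(alpha)-1), whose
# maximum over alpha solves 3*(1 - exp(-alpha)) = alpha (Wien's law).
import numpy as np
from scipy.optimize import brentq

f = lambda a: 3.0 * (1.0 - np.exp(-a)) - a
print(brentq(f, 1.0, 5.0))            # ~2.821, the Wien peak

hhat, c, T = 4.8e-11, 3.0e8, 1000.0   # Ks, m/s, K
nu_cut = 10 * T / hhat                # cut-off at alpha = 10
print(nu_cut)                         # ~2.1e14 Hz
print(c / nu_cut)                     # lambda_min ~1.4e-6 m
```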
Modern physicists are trained to believe that Planck's constant $h$ as the universal quantum of action represents a smallest unit of a "quantized" world, with a corresponding Planck length $l_p= 1.62\times 10^{-35}\, m$ as a smallest unit of length, about 20 orders of magnitude smaller than the proton diameter.
We have seen that Planck's constant enters in Planck's radiation law in the form $\hat h =\frac{h}{k}$, and not as $h$, and that $\hat h$ has the role of setting a small-wavelength cut-off scaling with $\frac{c}{T}$.
Small-wavelength cut-off in the radiation from a body is possible to envision in wave mechanics as an expression of finite precision analog computation. In this perspective Planck's universal quantum of action emerges as unnecessary fiction about exceedingly small quantities beyond reason and reality.
Thursday, 20 March 2014
Principle of Least Action vs Adam Smith's Invisible Hand
Violation of the PLA of the capitalistic system in 1929.
The Principle of Least Action (PLA) expressing
• Stationarity of the Action (the integral in time of the Lagrangian),
with the Lagrangian the difference between kinetic and potential energies, is cherished by physicists as a deep truth about physics: Tell me the Lagrangian and I will tell you the physics, because a dynamical system will (by reaction to local forces) evolve so as to keep the Action stationary as if led by an invisible hand steering the system towards a final cause of least action.
PLA is similar to the invisible hand of Adam Smith, supposedly steering an economy towards a final cause of maximal efficiency or least action (maximal common happiness) by asking each member of the economy to seek to maximize individual profit (individual happiness). This is the essence of the capitalistic system. The idea is that a final cause of maximal efficiency can be reached without telling the members the meaning of the whole thing, just telling each one to seek to maximize his/her own individual profit (happiness).
Today the capitalistic system is shaking and nobody knows how to steer towards a final cause of maximal efficiency. So the PLA of economy seems to be rather empty of content. It may be that similarly the PLA of physics is void of real physics. In particular, the idea of a smallest quantum of action as a basis of quantum mechanics, may well be unphysical.
To Per-Anders Ivert, Editor of the SMS-Bulletin
I have sent the following contribution to the Swedish Mathematical Society's member bulletin Bulletinen, prompted by editor Per-Anders Ivert's opening words in the February 2014 issue.
To the SMS-Bulletin
Editor Per-Anders Ivert opens the February issue of Bulletinen with: "Speaking of reactions: such rarely come, but I was made aware of an amusing reaction to something I wrote a few issues ago, about whether school mathematics is needed. A fellow from Chalmers, a person I do not know and believe I have never been in contact with, wrote on his blog":
• The October issue of the Swedish Mathematical Society's Bulletin takes up the question of whether school mathematics is "needed".
• Chairman Per-Anders Ivert opens with: I myself cannot answer what is needed and not needed. It depends on what one means by "needed" and also on what school mathematics looks like.
• Ulf Persson follows up with a reflection which begins: It seems to be a fact that a large part of the population detests mathematics and finds school mathematics painful.
• Ivert and Persson express the bewilderment, and the anguish that comes with it, which characterizes the mathematician's view of the role of his subject in today's school: The professional mathematician no longer knows whether school mathematics is "needed", and then neither the school mathematician nor the pupil knows it either.
Ivert continues:
• "When I saw this I was rather surprised. I thought that my quoted words were completely uncontroversial, and I did not quite understand what motivated the sarcasm "chairman". This Chalmers player probably did not believe that I was chairman of the Society; presumably it is meant as some allusion to East Asian political structures".
• "On closer reading I saw, however, that Ulf Persson had criticized this blogger in his text, which apparently had caused a mental short-circuit in the blogger, and the associations had started running criss-cross. If one wants to ponder my "bewilderment and anguish", I offer some material in this issue".
Ivert's exposition about the "fellow from Chalmers" and "Chalmers player" should be seen against the background of the open letter to the Swedish Mathematical Society and the National Committee for Mathematics which I published on my blog on 22 December 2013, asking what responsibility the Society and the Committee take for mathematics education in the country, including school mathematics and the ongoing Matematiklyftet (the national Boost for Mathematics program).
Despite several reminders I have received no answer, neither from the Society (chairman Pär Kurlberg), nor the Committee (Torbjörn Lundh), nor KVA-Mathematics (Nils Dencker), and I now put this question once more directly to you, Per-Anders Ivert: If you and the Society have not been struck by any "bewilderment and anguish", then you must be able to give an answer and publish it together with this contribution of mine in the next issue of Bulletinen.
Concerning Ulf Persson's contribution under The Floor Is Mine (Ordet är mitt), one can say that what counts as regards knowledge is difference in knowledge: what everybody knows is of little interest. A school which primarily aims at giving everybody a common base of knowledge, whatever that may be, has difficulty motivating its pupils and is devastating both for the many who do not reach the common goals and for the somewhat fewer who could perform much better. As long as Euclidean geometry and Latin were reserved for a small part of the pupils, motivation could be created and study goals attained, fairly independently of the intellectual capacity and social background of pupils (and teachers). Matematiklyftet, which is to lift everybody, is an empty swing in the air at great cost.
The epithets about my person in Bulletinen have now been extended from "the Johnson gang" to "fellow from Chalmers" and "Chalmers player", the latter perhaps no longer so apt since I moved to KTH 7 years ago. Per-Anders deplores linguistic degradation, but that evidently does not include "fellow", "player" and "mental short-circuit".
Claes Johnson
Professor Emeritus of Applied Mathematics, KTH
Wednesday, 19 March 2014
Lagrange's Biggest Mistake: Least Action Principle Not Physics!
The Principle of Least Action, formulated by Lagrange in his monumental treatise Mecanique Analytique (1811) collecting 50 years of work, is viewed as the crown jewel of the Calculus of Newton and Leibniz as the mathematical basis of the scientific revolution:
• The equations of motion of a dynamical system are the same equations that express that the action, as the integral over time of the difference of kinetic and potential energies, is stationary, that is, does not change under small variations.
The basic idea goes back to Leibniz:
• In change of motion, the quantity of action takes on a Maximum or Minimum.
And to Maupertuis (1746):
• Whenever any action occurs in nature, the quantity of action employed in this change is the least possible.
In mathematical terms, the Principle of Least Action expresses that the trajectory $u(t)$ followed by a dynamical system over a given time interval $I$ with time coordinate $t$, is determined by the condition of stationarity of the action:
• $\frac{d}{d\epsilon}\int_I(T(u(t)+\epsilon v(t)) - V(u(t)+\epsilon v(t)))\, dt =0$,
where $T(u(t))$ is kinetic energy and $V(u(t))$ is potential energy of $u(t)$ at time $t$, and $v(t)$ is an arbitrary perturbation of $u(t)$, combined with an initial condition. In the basic case of a harmonic oscillator:
• $T(u(t))=\frac{1}{2}\dot u^2(t)$ with $\dot u=\frac{du}{dt}$,
• $V(u(t))=\frac{1}{2}u^2(t)$
• stationarity is expressed as force balance in the form of Newton's 2nd law: $\ddot u (t) +u(t) = 0$.
The Principle of Least Action is viewed as a constructive way of deriving the equations of motion expressing force balance according to Newton's 2nd law, in situations with specific choices of coordinates for which direct establishment of the equations is tricky.
From the success in this respect the Principle of Least Action has been elevated from mathematical trick to physical principle asking Nature to arrange itself so as to keep the action stationary, as if Nature could compare the action integral for different trajectories and choose the trajectory with least action towards a teleological final cause, while in fact Nature can only respond to forces as expressed in equations of motion.
But if Nature does not have the capability of evaluating and comparing action integrals, it can be misleading to think this way. In the worst case it leads to invention of physics without real meaning, which is acknowledged by Lagrange in the Preface to Mecanique Analytique.
The ultimate example is the very foundation of quantum physics as the pillar of modern physics, based on a concept of an elementary (smallest) quantum of action denoted by $h$ and named Planck's constant, with dimension energy $\times$ time. Physicists are trained to view the elementary quantum of action as representing a "quantization" of reality, expressed as follows on Wikipedia:
• In physics, a quantum (plural: quanta) is the minimum amount of any physical entity involved in an interaction.
In the quantum world light consists of a stream of discrete light quanta named photons. Although Einstein in his 1905 article on the photoelectric effect found it useful as a heuristic idea to speak about light quanta, he later changed his mind:
• The quanta really are a hopeless mess. (to Pauli)
But nobody listened to Einstein and there we are today, with an elementary quantum of action which is viewed as the basis of modern physics but has no physical reality. Schrödinger, supported by Einstein, said:
• There are no particles or quanta. All is waves.
Connecting to the previous post, note that to compute a solution according to the Principle of Least Action, typically an iterative method based on relaxation of the equations of motion is used, which has a physical meaning as response to imbalance of forces. This shows the strong connection between computational mathematics as iterative time-stepping and analog physics as motion in time subject to forces, which can be seen as a mindless evolution towards a hidden final cause, as if directed by an invisible hand of a mind understanding the final cause.
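As a minimal sketch (our own illustration) of such relaxation, gradient descent on the discretized action of the harmonic oscillator converges to the classical trajectory:

```python
# Gradient descent on the discretized action S = sum over the mesh of
# (u'^2/2 - u^2/2)*dt for the harmonic oscillator, endpoints held fixed.
import numpy as np

n = 100
t = np.linspace(0.0, np.pi / 2, n + 1)
dt = t[1] - t[0]
u = np.zeros(n + 1)
u[0], u[-1] = 1.0, 0.0                 # boundary values of cos(t)

for _ in range(200000):
    # dS/du_i = -(u_{i+1} - 2u_i + u_{i-1})/dt - dt*u_i
    grad = -(u[2:] - 2.0 * u[1:-1] + u[:-2]) / dt - dt * u[1:-1]
    u[1:-1] -= 0.2 * dt * grad         # relaxation = response to force imbalance

print(np.max(np.abs(u - np.cos(t))))   # small (mesh-size error): u ~ cos(t)
```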
Physics as Analog Computation instead of Physics as Observation
Bohr plotting the Copenhagen Interpretation of quantum mechanics together with Heisenberg and Pauli (left) and Bohr wondering what he did 30 years later (right).
To view physics as a form of analog computation which can be simulated by digital computation offers resolutions of the following main unsolved problems of modern microscopic and classical macroscopic physics:
1. Interaction between subject (experimental apparatus) and object under observation.
2. Meaning of smallest quantum of action named Planck's constant $h$.
3. Contradiction between particle and wave qualities. Particle-wave duality.
4. Meaning of the 2nd law of thermodynamics and direction of time.
5. Meaning of Heisenberg's Uncertainty Principle.
6. Loss of cause-effect relation by resort to microscopic statistics.
7. Statistical interpretation of Schrödinger's multidimensional wave function.
8. Meaning of Bohr's Complementarity Principle.
9. Meaning of Least Action Principle.
This view is explored on The World as Computation and Computational Blackbody Radiation suggesting the following answers to these basic problems:
1. Observation by digital simulation is possible without interference with physical object.
2. Planck's constant $h$ can be viewed as a computational mesh size parameter.
3. All is wave. There are no particles. No particle-wave duality.
4. Dissipation as an effect of finite precision computation gives a 2nd law and direction of time.
5. Uncertainty Principle as effect of finite precision computation.
6. Statistics replaced by finite precision computation.
7. Schrödinger's wave equation as system in 3d without statistical interpretation.
8. No contradiction between complementary properties. No need of Complementarity Principle.
9. Least Action Principle as computational mathematical principle without physical reality.
The textbook physics harboring the unsolved problems is well summarized by Bohr:
• There is no quantum world. There is only an abstract quantum physical description. It is wrong to think that the task of physics is to find out how nature is. Physics concerns what we can say about nature…
• Everything we call real is made of things that cannot be regarded as real. If quantum mechanics hasn't profoundly shocked you, you haven't understood it yet.
• We must be clear that when it comes to atoms, language can be used only as in poetry. The poet, too, is not nearly so concerned with describing facts as with creating images and establishing mental connections.
Tuesday, 18 March 2014
Blackbody as Linear High Gain Amplifier
A blackbody acts as a high gain linear (black) amplifier.
The analysis on Computational Blackbody Radiation (with book) shows that a radiating body can be seen as a linear high gain amplifier with a high-frequency cut-off scaling with noise temperature, modeled by a wave equation with small damping, which after Fourier decomposition in space takes the form of a damped linear oscillator for each wave frequency $\nu$:
• $\ddot u_\nu +\nu^2u_\nu - \gamma\dddot u_\nu = f_\nu$,
where $u_\nu(t)$ is oscillator amplitude and $f_\nu (t)$ signal amplitude of wave frequency $\nu$, with $t$ time and the dot indicating differentiation with respect to $t$. Here $\gamma$ is a small constant satisfying $\gamma\nu^2 << 1$, and the frequency is subject to a cut-off of the form $\nu < \frac{T_\nu}{h}$, where
• $T_\nu =\overline{\dot u_\nu^2}\equiv\int_I \dot u_\nu^2(t)\, dt$,
is the (noise) temperature of frequency $\nu$, $I$ a unit time interval and $h$ a constant representing a level of finite precision.
The analysis shows, under an assumption of near resonance, the following basic relation in a stationary state:
• $\gamma\overline{\ddot u_\nu^2} \approx \overline{f_\nu^2}$,
as a consequence of small damping guiding $u_\nu (t)$ so that $\dot u_\nu(t)$ is out of phase with $f_\nu(t)$ and thus "pumps" the system little. The result is that the signal $f_\nu (t)$ is balanced to major part by the oscillator
• $\ddot u_\nu +\nu^2u_\nu$,
and to minor part by the damping
• $ - \gamma\dddot u_\nu$,
since
• $\gamma^2\overline{\dddot u_\nu^2} \approx \gamma\nu^2 \gamma\overline{\ddot u_\nu^2}\approx\gamma\nu^2\overline{f_\nu^2} <<\overline{f_\nu^2}$.
This means that the blackbody can be viewed to act as an amplifier radiating the signal $f_\nu$ under the small input $-\gamma \dddot u_\nu$, thus with a high gain. The high frequency cut-off then gives a requirement on the temperature $T_\nu$, referred to as noise temperature, to achieve high gain.
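The amplifier picture can be illustrated by the steady-state frequency response of the model above (a sketch of our own, with a single forcing frequency $w$ instead of near-resonant noise):

```python
# For f = Re[F exp(iwt)], the steady state of u'' + nu^2 u - gamma u''' = f
# has complex amplitude U = F/(nu^2 - w^2 + i*gamma*w^3): the gain |U/F|
# peaks at w = nu with value 1/(gamma*nu^3).
import numpy as np

nu = 1.0e3
gamma = 1.0e-12                   # gamma*nu^2 = 1e-6 << 1
w = np.array([0.9 * nu, 0.99 * nu, nu, 1.01 * nu, 1.1 * nu])
gain = 1.0 / np.abs(nu**2 - w**2 + 1j * gamma * w**3)
print(gain)                       # sharp peak at resonance
print(1.0 / (gamma * nu**3))      # ~1e3, the resonant (high) gain
```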
Quantum Mechanics from Blackbody Radiation as "Act of Despair"
Max Planck: The whole procedure was an act of despair because a theoretical interpretation (of black-body radiation) had to be found at any price, no matter how high that might be… I was ready to sacrifice any of my previous convictions about physics… For this reason, on the very first day when I formulated this law, I began to devote myself to the task of investing it with true physical meaning.
The textbook history of modern physics tells that quantum mechanics was born from Planck's proof of the universal law of blackbody radiation, based on statistics of discrete lumps of energy or energy quanta $h\nu$, where $h$ is Planck's constant and $\nu$ frequency. The textbook definition of a blackbody is a body which absorbs all, reflects none and re-emits all of incident radiation:
• A black body is an idealized physical body that absorbs all incident electromagnetic radiation, regardless of frequency or angle of incidence. (Wikipedia)
• "Blackbody radiation" or "cavity radiation" refers to an object or system which absorbs all radiation incident upon it and re-radiates energy which is characteristic of this radiating system only, not dependent upon the type of radiation which is incident upon it. (Hyperphysics)
• Theoretical surface that absorbs all radiant energy that falls on it, and radiates electromagnetic energy at all frequencies, from radio waves to gamma rays, with an intensity distribution dependent on its temperature. (Merriam-Webster)
• An ideal object that is a perfect absorber of light (hence the name since it would appear completely black if it were cold), and also a perfect emitter of light. (Astro Virginia)
• A black body is a theoretical object that absorbs 100% of the radiation that hits it. Therefore it reflects no radiation and appears perfectly black. (Egglescliff)
• A hypothetic body that completely absorbs all wavelengths of thermal radiation incident on it. (Eric Weisstein's World of Physics)
But there is something more to a blackbody, and that is the high frequency cut-off, expressed in Wien's displacement law, of the principal form
• $\nu < \frac{T}{\hat h}$,
where $\nu$ is frequency, $T$ temperature and $\hat h$ a Planck constant, stating that only frequencies below the cut-off $\frac{T}{\hat h}$ are re-emitted. Absorbed frequencies above the cut-off will then be stored as internal energy in the body under increasing temperature.
Bodies made of different materials which absorb all incident radiation will have different high-frequency cut-offs, and an (ideal) blackbody should then be characterized as having maximal cut-off, that is smallest Planck constant $\hat h$, with the maximum taken over all real bodies.
A cavity with graphite walls is used as a reference blackbody defined by the following properties:
1. absorption of all incident radiation
2. maximal cut-off - smallest Planck constant $\hat h\approx 4.8\times 10^{-11}\, Ks$,
and $\hat h =\frac{h}{k}$ is Planck's constant $h$ scaled by Boltzmann's constant $k$.
Planck viewed the high frequency cut-off defined by the Planck constant $\hat h$ as inexplicable in Maxwell's classical electromagnetic wave theory. In an "act of despair" to save physics from collapse in an "ultraviolet catastrophe", a task which Planck had taken upon himself, Planck resorted to statistics of discrete energy quanta $h\nu$, which in the 1920s resurfaced as a basic element of quantum mechanics.
But a high frequency cut-off in wave mechanics is not inexplicable; it is a well known phenomenon in all forms of waves, including elastic, acoustic and electromagnetic waves, and can be modeled as a dissipative loss effect, where high frequency wave motion is broken down into chaotic motion stored as internal heat energy. For details, see Computational Blackbody Radiation.
It is a mystery why this was not understood by Planck. Science created in an "act of despair" runs the risk of being irrational and flat wrong, and that is if anything the trademark of quantum mechanics based on discrete quanta.
Quantum mechanics as deterministic wave mechanics may be rational and understandable. Quantum mechanics as statistics of quanta is irrational and confusing. All the troubles and mysteries of quantum mechanics emanate from the idea of discrete quanta. Schrödinger had the solution:
• I insist upon the view that all is waves.
• If all this damned quantum jumping were really here to stay, I should be sorry I ever got involved with quantum theory.
But Schrödinger was overpowered by Bohr and Heisenberg, who have twisted the brains of modern physicists with devastating consequences...
Monday, 17 March 2014
Unphysical Combination of Complementary Experiments
Let us take a look at how Bohr in his famous 1927 Como Lecture describes complementarity as a fundamental aspect of Bohr's Copenhagen Interpretation still dominating textbook presentations of quantum mechanics:
• The quantum theory is characterised by the acknowledgment of a fundamental limitation in the classical physical ideas when applied to atomic phenomena. The situation thus created is of a peculiar nature, since our interpretation of the experimental material rests essentially upon the classical concepts.
• Notwithstanding the difficulties which hence are involved in the formulation of the quantum theory, it seems, as we shall see, that its essence may be expressed in the so-called quantum postulate, which attributes to any atomic process an essential discontinuity, or rather individuality, completely foreign to the classical theories and symbolised by Planck's quantum of action.
OK, we learn that quantum theory is based on a quantum postulate about an essential discontinuity symbolised as Planck's constant $h=6.626\times 10^{-34}\, Js$ as a quantum of action. Next we read about necessary interaction between the phenomena under observation and the observer:
• Now the quantum postulate implies that any observation of atomic phenomena will involve an interaction with the agency of observation not to be neglected.
• Accordingly, an independent reality in the ordinary physical sense can neither be ascribed to the phenomena nor to the agencies of observation.
• The circumstance, however, that in interpreting observations use has always to be made of theoretical notions, entails that for every particular case it is a question of convenience at what point the concept of observation involving the quantum postulate with its inherent 'irrationality' is brought in.
Next, Bohr emphasizes the contrast between the quantum of action and classical concepts:
• The fundamental contrast between the quantum of action and the classical concepts is immediately apparent from the simple formulas which form the common foundation of the theory of light quanta and of the wave theory of material particles. If Planck's constant be denoted by $h$, as is well known: $E\tau = I\lambda = h$, where $E$ and $I$ are energy and momentum respectively, $\tau$ and $\lambda$ the corresponding period of vibration and wave-length.
• In these formulae the two notions of light and also of matter enter in sharp contrast.
• While energy and momentum are associated with the concept of particles, and hence may be characterised according to the classical point of view by definite space-time co-ordinates, the period of vibration and wave-length refer to a plane harmonic wave train of unlimited extent in space and time.
• Just this situation brings out most strikingly the complementary character of the description of atomic phenomena which appears as an inevitable consequence of the contrast between the quantum postulate and the distinction between object and agency of measurement, inherent in our very idea of observation.
Bohr clearly brings out the unphysical aspects of the basic action formula
• $E\tau = I \lambda = h$,
where energy $E$ and momentum $I$ related to particle are combined with period $\tau$ and wave-length $\lambda$ related to wave.
Bohr then seeks to resolve the contradiction by naming it complementarity as an effect of interaction between instrument and object:
• Consequently, evidence obtained under different experimental conditions cannot be comprehended within a single picture, but must be regarded as complementary in the sense that only the totality of the phenomena exhausts the possible information about the objects.
• In quantum mechanics, however, evidence about atomic objects obtained by different experimental arrangements exhibits a novel kind of complementary relationship.
• … the notion of complementarity simply characterizes the answers we can receive by such inquiry, whenever the interaction between the measuring instruments and the objects form an integral part of the phenomena.
Bohr's complementarity principle has been questioned by many over the years:
• Bohr’s interpretation of quantum mechanics has been criticized as incoherent and opportunistic, and based on doubtful philosophical premises. (Simon Saunders)
• Despite the expenditure of much effort, I have been unable to obtain a clear understanding of Bohr’s principle of complementarity (Einstein).
Of course an object may have complementary qualities such as e.g. color and weight, which can be measured in different experiments, but it is meaningless to form a new concept as color times weight or colorweight and then desperately seek to give it a meaning.
In the New View presented on Computational Blackbody Radiation the concept of action as e.g. position times velocity has a meaning in a threshold condition for dissipation, but is not a measure of a quantity carried by a physical object, such as mass or energy.
The ruling Copenhagen interpretation was developed by Bohr contributing a complementarity principle and Heisenberg contributing a related uncertainty principle based on position times momentum (or velocity), the same unphysical complementary combination as Bohr's. The uncertainty principle is often expressed as a lower bound on the product of weighted norms of a function and its Fourier transform, and then interpreted as a competition between localization in space and frequency, or between particle and wave. In this form of the uncertainty principle the unphysical aspect of a product of position and frequency is hidden by mathematics.
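For reference, the Fourier-analytic form alluded to reads, with the convention $\hat f(\xi )=\int f(x)e^{-2\pi ix\xi}\, dx$:
• $\Vert xf\Vert_2\,\Vert \xi\hat f\Vert_2\ge \frac{1}{4\pi}\Vert f\Vert_2^2$,
a theorem about any square-integrable function $f$, with no physics in it until $x$ and $\xi$ are interpreted as position and momentum.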
The Copenhagen Interpretation was completed by Born's suggestion to view (the square of the modulus of) Schrödinger's wave function as a probability distribution for particle configuration, which in the absence of something better became the accepted way to handle the apparent wave-particle contradiction, by viewing it as a combination of probability wave with particle distribution.
New Uncertainty Principle as Wien's Displacement Law
The recent series of posts based on Computational Blackbody Radiation suggests that Heisenberg's Uncertainty Principle can be understood as a consequence of Wien's Displacement Law, expressing a high-frequency cut-off in blackbody radiation scaling with temperature according to Planck's radiation law:
• $B_\nu (T)=\gamma\nu^2T\times \theta(\nu ,T)$,
where $B_\nu (T)$ is radiated energy per unit frequency, surface area, viewing angle and second, $\gamma =\frac{2k}{c^2}$ where $k = 1.3806488\times 10^{-23} m^2 kg/s^2 K$ is Boltzmann's constant and $c$ the speed of light in $m/s$, $T$ is temperature in Kelvin $K$,
• $\theta (\nu ,T)=\frac{\alpha}{e^\alpha -1}$ with $\alpha =\frac{h\nu}{kT}$,
where $\theta (\nu ,T)\approx 1$ for $\alpha < 1$ and $\theta (\nu ,T)\approx 0$ for $\alpha > 10$ as high-frequency cut-off, with $h=6.626\times 10^{-34}\, Js$ Planck's constant. More precisely, maximal radiance for a given temperature $T$ occurs for $\alpha \approx 2.821$ with corresponding frequency
• $\nu_{max} = 2.821\frac{T}{\hat h}$ where $\hat h=\frac{h}{k}=4.8\times 10^{-11}\, Ks$,
with a rapid drop for $\nu >\nu_{max}$.
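The quoted peak value is easy to check numerically. A minimal sketch in Python (assuming, as above, $\alpha =\frac{h\nu}{kT}$, so that at fixed $T$ the radiance is proportional to $\alpha^3/(e^\alpha -1)$, whose maximum solves $3(1-e^{-\alpha})=\alpha$):

```python
import numpy as np
from scipy.optimize import brentq

h, k = 6.626e-34, 1.380649e-23                 # Planck (Js), Boltzmann (J/K)
hat_h = h / k                                  # scaled Planck's constant, ~4.80e-11 Ks

# At fixed T, B_nu ~ alpha^3/(exp(alpha)-1); the maximum solves 3(1-e^-a) = a.
a_max = brentq(lambda a: 3.0 * (1.0 - np.exp(-a)) - a, 1.0, 5.0)
print(f"alpha at maximal radiance: {a_max:.3f}")          # ~2.821

T = 1500.0                                                # temperature in K
print(f"nu_max at {T} K: {a_max * T / hat_h:.3e} Hz")     # = 2.821*T/hat_h
```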
The proof of Planck's Law in Computational Blackbody Radiation explains the high frequency cut-off as a consequence of finite precision computation introducing a dissipative effect damping high-frequencies.
A connection to Heisenberg's Uncertainty Principle can be made by noting that a high-frequency cut-off condition of the form
• $\nu < \frac{T}{\hat h}$
can be rephrased in the following form connecting to Heisenberg's Uncertainty Principle:
• $u_\nu\dot u_\nu > \hat h$ (New Uncertainty Principle)
where $u_\nu$ is position amplitude, $\dot u_\nu =\nu u_\nu$ is velocity amplitude of a wave of frequency $\nu$ with $\dot u_\nu^2 =T$.
The New Uncertainty Principle expresses that observation/detection of a wave, that is of its amplitude $u$ and frequency $\nu =\frac{\dot u}{u}$, requires
• $u\dot u>\hat h$.
The New Uncertainty Principle concerns observation/detection of amplitude and frequency as physical aspects of wave motion, and not, as in Heisenberg's Uncertainty Principle, particle position and wave frequency as unphysical complementary aspects.
Sunday, March 16, 2014
Uncertainty Principle, Whispering and Looking at a Faint Star
The recent series of posts on Heisenberg's Uncertainty Principle based on Computational Blackbody Radiation suggests the following alternative equivalent formulations of the principle:
1. $\nu < \frac{T}{\hat h}$,
2. $u_\nu\dot u_\nu > \hat h$,
where $u_\nu$ is position amplitude, $\dot u_\nu =\nu u_\nu$ is velocity amplitude of a wave of frequency $\nu$ with $\dot u_\nu^2 =T$, and $\hat h =4.8\times 10^{-11}Ks$ is Planck's constant scaled with Boltzmann's constant.
Here, 1 represents Wien's displacement law stating that the radiation from a body is subject to a frequency limit scaling with temperature $T$ with the factor $\frac{1}{\hat h}$.
2 is superficially similar to Heisenberg's Uncertainty Principle as an expression of the following physics: In order to detect a wave of amplitude $u$, it is necessary that the frequency $\nu$ of the wave satisfies $\nu u^2>\hat h$. In particular, if the amplitude $u$ is small, then the frequency $\nu$ must be large.
This connects to (i) communication by whispering and (ii) viewing a distant star, both being based on the possibility of detecting small amplitude high-frequency waves.
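The equivalence of formulations 1 and 2 is a one-line computation using the stated conventions $\dot u_\nu =\nu u_\nu$ and $\dot u_\nu^2 =T$:
• $u_\nu\dot u_\nu =\frac{\dot u_\nu^2}{\nu}=\frac{T}{\nu}$, so that $u_\nu\dot u_\nu >\hat h$ holds exactly when $\nu <\frac{T}{\hat h}$.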
The standard presentation of Heisenberg's Uncertainty Principle is loaded with contradictions:
• The uncertainty principle is certainly one of the most famous and important aspects of quantum mechanics.
• But what is the exact meaning of this principle, and indeed, is it really a principle of quantum mechanics? And, in particular, what does it mean to say that a quantity is determined only up to some uncertainty?
• So the question may be asked what alternative views of the uncertainty relations are still viable.
• Of course, this problem is intimately connected with that of the interpretation of the wave function, and hence of quantum mechanics as a whole.
• Since there is no consensus about the latter, one cannot expect consensus about the interpretation of the uncertainty relations either.
In other words, today there is no consensus on the meaning of Heisenberg's Uncertainty principle. The reason may be that it has no meaning, but that there is an alternative which is meaningful.
Notice in particular that the product of two complementary or conjugate variables such as position and momentum is questionable if viewed as representing a physical quantity, while as a threshold it can make sense.
Friday, March 14, 2014
DN Debatt: The Principle of Public Access Is Eroding through Prostration Paragraphs
Nils Funcke observes on DN Debatt, under the headline 'The principle of public access is about to erode away':
• The Swedish principle of public access is slowly but surely being worn down.
• ...outright prostration paragraphs ('plattläggningsparagrafer') are accepted...
• At the EU accession in 1995, Sweden issued a declaration: The principle of public access, in particular the right to access official documents, and the constitutional protection of the freedom to communicate information, are and remain fundamental principles that form part of Sweden's constitutional, political and cultural heritage.
One example of a prostration paragraph is the new precedent of the Supreme Administrative Court (Högsta förvaltningsdomstolen, HFD):
• For a document to be finalized, and thereby drawn up, and thereby an official document, some action must be taken which shows that the document is finalized.
With this new rule of law HFD lays the citizen flat on the ground before the authority, which can now itself decide whether and when the action that, according to the authority, is required for finalization has, or has not, been taken by the authority.
Thursday, March 13, 2014
Against Measurement, Against Copenhagen: For Rationality and Reality by Computation
John Bell's Against Measurement is a direct attack on the heart of quantum mechanics as expressed in the Copenhagen Interpretation according to Bohr. Bell poses the following questions:
• What exactly qualifies some physical systems to play the role of "measurer"?
• Was the wavefunction of the world waiting to jump for thousands of millions of years until a single-celled living creature appeared?
• Or did it have to wait a little longer, for some better qualified system…with a Ph D?
Physicists of today have no answers, with far-reaching consequences for all of science: If there is no rationality and reality in physics as the most rational and real of all sciences, then there can be no rationality and reality anywhere…If real physics is not about what is, then real physics is irrational and irreal…and then…any bubble can inflate to any size...
The story is well described by 1969 Nobel Laureate Murray Gell-Mann:
• Niels Bohr brainwashed a whole generation of theorists into thinking that the job of interpreting quantum theory was done 50 years ago.
But there is hope today, in digital simulation which offers observation without interference. Solving Schrödinger's equation by computation gives information about physical states without touching the physics. It opens a road to bring physics back to the rationality of 19th century physics in the quantum nano-world of today…without quantum computing...
Increasing Uncertainty about Heisenberg's Uncertainty Principle + Resolution
Heisenberg set the tone: My mind was formed by studying philosophy, Plato and that sort of thing….The reality we can put into words is never reality itself…The atoms or elementary particles themselves are not real; they form a world of potentialities or possibilities rather than one of things or facts...If we omitted all that is unclear, we would probably be left with completely uninteresting and trivial tautologies...
The 2012 article Violation of Heisenberg’s Measurement-Disturbance Relationship by Weak Measurements by Lee A. Rozema et al, informs us:
• The Heisenberg Uncertainty Principle is one of the cornerstones of quantum mechanics.
• In his original paper on the subject, Heisenberg wrote “At the instant of time when the position is determined, that is, at the instant when the photon is scattered by the electron, the electron undergoes a discontinuous change in momentum. This change is the greater the smaller the wavelength of the light employed, i.e., the more exact the determination of the position”.
• The modern version of the uncertainty principle proved in our textbooks today, however, deals not with the precision of a measurement and the disturbance it introduces, but with the intrinsic uncertainty any quantum state must possess, regardless of what measurement (if any) is performed.
• It has been shown that the original formulation is in fact mathematically incorrect.
OK, so we learn that Heisenberg's Uncertainty Principle (in its original formulation, presumably) is a cornerstone of quantum physics, which however is mathematically incorrect, and that there is a modern version concerned not with measurement but with an intrinsic uncertainty of a quantum state regardless of measurement. In other words, a cornerstone of quantum mechanics has been moved.
• The uncertainty principle (UP) occupies a peculiar position in physics. On the one hand, it is often regarded as the hallmark of quantum mechanics.
• On the other hand, there is still a great deal of discussion about what it actually says.
• A physicist will have much more difficulty in giving a precise formulation than in stating e.g. the principle of relativity (which is itself not easy).
• Moreover, the formulation given by various physicists will differ greatly not only in their wording but also in their meaning.
We learn that the uncertainty of the uncertainty principle has been steadily increasing ever since it was formulated by Heisenberg in 1927.
In a recent series of posts based on Computational Blackbody Radiation I have suggested a new approach to the uncertainty principle as a high-frequency cut-off condition of the form
• $\nu < \frac{T}{\hat h}$,
where $\nu$ is frequency, $T$ temperature in Kelvin $K$ and $\hat h=4.8\times 10^{-11}\, Ks$ is a scaled Planck's constant. The significance of the cut-off is that a body of temperature $T\, K$ cannot emit frequencies larger than $\frac{T}{\hat h}$, because the wave synchronization required for emission is destroyed by internal friction damping these frequencies. The cut-off condition thus expresses Wien's displacement law.
The cut-off condition can alternatively be expressed as
• $u_\nu\dot u_\nu > \hat h$,
where $u_\nu$ is position amplitude and $\dot u_\nu =\frac{du_\nu}{dt}$ velocity amplitude of a wave of frequency $\nu$, with $\dot u_\nu^2 =T$ and $\dot u_\nu =\nu u_\nu$. We see that the cut-off condition superficially has a form similar to Heisenberg's uncertainty principle, but that the meaning is entirely different and in fact familiar as Wien's displacement law.
We thus find that Heisenberg's uncertainty principle can be replaced by Wien's displacement law, which can be seen as an effect of internal friction preventing synchronization and thus emission of frequencies $\nu > \frac{T}{\hat h}$.
The high-frequency cut-off condition with its dependence on temperature is similar to the high-frequency damping of a loudspeaker, which can depend on the level of the sound.
Wednesday, March 12, 2014
Blackbody Radiation as Collective Vibration Synchronized by Resonance
There are two descriptions of the basic phenomenon of radiation from a heated body (blackbody or greybody radiation), starting from a description of light either as a stream of particles named photons, or as electromagnetic waves.
That the particle description of light is both primitive and unphysical was well understood before Einstein in 1905 suggested an explanation of the photoelectric effect based on light as a stream of particles later named photons, stimulated by Planck's derivation of Planck's law in 1900 based on radiation emitted in discrete quanta. However, with the development of quantum mechanics as a description of atomistic physics in the 1920s, the primitive and unphysical idea of light as a stream of particles was turned into a trademark of modern physics of highest insight.
The standpoint today is that light is both particle and wave, and the physicist is free to choose the description which best serves a given problem. In particular, the particle description is supposed to serve well to explain the physics of both blackbody radiation and photoelectricity. But since the particle description is primitive and unphysical, there must be something fishy about the idea that emission of radiation from a heated body results from emission of individual photons from individual atoms together forming a stream of photons leaving the body. We will return to the primitivism of this view after a study of the more educated idea of light as an (electromagnetic) wave phenomenon.
This more educated view is presented on Computational Blackbody Radiation with the following basic message:
1. Radiation is a collective phenomenon generated from in-phase oscillations of atoms in a structured web of atoms synchronized by resonance.
2. A radiating web of atoms acts like a system of tuning forks which tend to vibrate in phase as a result of resonance by acoustic waves, or like a swarm of cicadas singing in phase.
3. A radiating body has a high-frequency cut-off scaling with temperature, cutting off frequencies $\nu > \frac{T}{\hat h}$ with $\hat h = 4.8 \times 10^{-11}\, Ks$, where $\nu$ is frequency and $T$ temperature in Kelvin $K$. This translates to a wave-length $\lambda = \hat h\frac{c}{T}\, m$ as the smallest correlation length for synchronization, where $c\, m/s$ is the speed of light. For $T =1500\, K$ we get $\lambda \approx 10^{-5}\, m$, which is about 20 times the wave length of visible light (see the check below).
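The numbers in point 3 can be reproduced in two lines; a minimal check, taking $5\times 10^{-7}\, m$ as a visible wave length:

```python
hat_h, c, T = 4.8e-11, 3.0e8, 1500.0   # Ks, m/s, K
lam_min = hat_h * c / T                # smallest synchronization wave length, m
print(lam_min)                         # ~9.6e-6 m, i.e. ~1e-5 m
print(lam_min / 5e-7)                  # ~19, about 20 visible wave lengths
```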
We can now understand that the particle view is primitive because it is unable to explain that the outgoing radiation consists of electromagnetic waves which are in-phase. If single atoms are emitting single photons there is no mechanism ensuring that corresponding particles/waves are in-phase, and so a most essential element is missing.
The analysis of Computational Blackbody Radiation shows that an ideal blackbody is characterized as a body which is (i) not reflecting and (ii) has a maximal high-frequency cut-off. It is observed that the emission from a hole in a cavity with graphite walls is a realization of a blackbody. This fact can be understood as an effect of the regular surface structure of graphite supporting collective atom oscillations synchronized by resonance on an atomic surface web of smallest mesh size $\sim 10^{-9}\, m$.
Liverpool Preprint: LTH 370
7 May 1996
Chris Michael, presented at the Rencontre de Physique 'Results and Perspectives in Particle Physics', La Thuile, March 4 1996
Theoretical Physics, Dept. of Mathematical Sciences, University of Liverpool,
Liverpool L69 3BX, UK
The status of non-perturbative QCD calculations for mesons with gluonic excitation is presented. Lattice results for the glueball spectrum are reviewed. For hybrid mesons, the heavy quark results are summarised and new results are presented for light quarks. Preliminary results for the spectrum of light-quark hybrid mesons indicate substantial mixing with quark model states for non-exotic $J^{PC}$. For the exotic hybrid mesons, the $1^{-+}$, $0^{+-}$ and $2^{+-}$ states are explored.
1 Introduction
Since the advent of QCD as a theory of hadronic interactions, there have been experimental searches for unambiguous evidence of gluonic excitations in mesons. These searches need to be guided by theoretical input. The theoretical exploration involves non-perturbative methods and lattice QCD has become the most reliable tool. Here we review the status of glueball mass determinations from the lattice. The main aspect of topicality comes from a widely publicised claim [1] that the lattice work uniquely targets a particular experimental candidate. We discuss this claim and put it in context.
Another area which is promising for a study of gluonic excitations is that of hybrid mesons. These have a gluonic field in a non-trivial representation so that it is truly excited. We review lattice results for this spectrum for the case of heavy quarks. New results for light quarks are presented. These preliminary results give strong evidence for the splitting among the many possible hybrid meson states. The states with exotic quantum numbers (i.e. not allowed by the naive quark model: $1^{-+}$, $0^{+-}$ and $2^{+-}$) are studied and their spectrum is estimated.
2 Glueball Masses
The difficulty in isolating glueball candidates experimentally comes from the indirect methods that have to be used to deduce if a given resonance is composed primarily of gluons or of quarks. Lattice QCD allows the quark masses to be varied at will. In the simplest case, the quenched approximation, the dynamical quark mass is taken as large so that no quark loops are present in the vacuum. In this approximation, glueballs are stable and do not mix with quark - antiquark mesons. This approximation is very easy to implement in lattice studies: the full gluonic action is used but no quark terms are included. This corresponds to a full non-perturbative treatment of the gluonic degrees of freedom in the vacuum. A systematic lattice study of the neglected quark loop effects can be made in principle - though no comprehensive treatment has yet been made.
The glueball mass can be measured on a lattice through evaluating the correlation of two closed colour loops (called Wilson loops) at a separation of $t$ lattice spacings. This correlation has contributions from all glueballs of the given symmetry, with the ground state contribution dominating at large $t$. In practice, sophisticated methods are used to choose loops such that the correlation is dominated by the ground state glueball. By using several different loops, a variational method can be used to achieve this effectively. Even so, it is worth keeping in mind that in principle it is upper limits on the ground state mass that are obtained.
The method also needs to be tuned to take account of the many glueballs: with different $J^{PC}$ and different momenta. On the lattice the Lorentz symmetry is reduced to that of a hypercube. Non-zero momentum states can be created (momentum is discrete in units of $2\pi/L$, where $L$ is the lattice spatial size). The usual relationship between energy and momentum is found for sufficiently small lattice spacing. Here we shall concentrate on the simplest case of zero momentum (obtained by summing the correlations over the whole spatial volume).
For a state at rest, the rotational symmetry becomes a cubic symmetry. The lattice states will transform under irreducible representations of this cubic symmetry group (called $A_1$, $A_2$, $E$, $T_1$ and $T_2$). These irreducible representations can be linked to the representations of the full rotation group SU(2). Thus, for example, the five spin components of a $J=2$ state should appear as the two-dimensional $E$ and the three-dimensional $T_2$ representations on the lattice, with degenerate masses. This degeneracy requirement then provides a test for the restoration of rotational invariance - which is expected to occur at sufficiently small lattice spacing.
Figure 1: The mass of the $0^{++}$ and $2^{++}$ glueball states from refs [2, 3, 4, 5] in units of $r_0$. The restoration of rotational invariance is shown by the degeneracy of the $E$ and $T_2$ representations that make up the $2^{++}$ state: shown by octagons and diamonds respectively. The straight lines show fits describing the approach to the continuum limit as $a^2\to 0$.
The results of lattice measurements [2, 3, 4, 5] of the $0^{++}$ and $2^{++}$ states are shown in fig 1. Since the lattice observables, such as the glueball mass $m_G$, are not in physical units, it is necessary to form dimensionless ratios of lattice observables to compare with experiment. Fig 1 shows the dimensionless combination of the lattice glueball mass with the lattice quantity $r_0$, a well measured quantity (given by $r^2\frac{dV}{dr}=1.65$ at $r=r_0$, where $V(r)$ is the lattice interquark potential at separation $r$) that can be used to calibrate the lattice spacing and so explore the continuum limit. The quantity plotted, $m_G r_0$, is expected to be equal to the product of the continuum quantities up to corrections of order $a^2$. This behaviour near the continuum limit is indeed found, as shown by the linear dependence of fig 1. The extrapolation to the continuum limit ($a^2\to 0$) can now be made with confidence. Note that older lattice data were only available at larger values of $a$, which explains why a smaller glueball mass was favoured at that time.
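The continuum extrapolation described here is an ordinary least-squares fit linear in $a^2$; a minimal sketch with purely illustrative numbers (not data from any of refs [2, 3, 4, 5]):

```python
import numpy as np

# Illustrative values only (not data from the paper): m_G * r0 for the 0++
# glueball at three lattice spacings, against (a/r0)^2.
a2_over_r0sq = np.array([0.04, 0.02, 0.01])
mG_r0 = np.array([3.70, 3.90, 4.00])

# Fit m_G*r0 = c0 + c1*(a/r0)^2 and read off the continuum value c0 at a = 0.
c1, c0 = np.polyfit(a2_over_r0sq, mG_r0, 1)
mG_GeV = c0 * 0.394          # convert with r0^{-1} ~ 0.394 GeV (r0 ~ 0.5 fm)
print(f"continuum m_G*r0 = {c0:.2f}  ->  m_G ~ {mG_GeV:.2f} GeV")
```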
The lattice results in fig 1 from the UKQCD and GF11 groups have signals which are of comparable statistical significance and which are consistent with each other. However, their published values [4, 5] of the glueball mass are different (1550 versus 1740 MeV). The GF11 group chose to extrapolate the ratio of the glueball mass to $m_\rho$ to the continuum limit. This ratio has the disadvantage that there can be corrections both of order $a$ and of order $a^2$, while their extrapolation assumes that only one of these is significant. They determine the scale from their own results for $m_\rho$, which yields a glueball mass of 1740 MeV, leading them to claim [5, 1] that the $f_0(1710)$ meson is a preferred glueball candidate. Their error estimate on the glueball mass does not take into account fully the systematic errors in the extraction of the continuum limit or those due to quenching.
Using instead the best determined continuum quantity $m_G r_0$ from the lattice results, we need to determine a physical value for $r_0$. From the interquark potential as determined in quarkonium spectroscopy, the value of $r_0$ in physical units is about 0.5 fm, and we will adopt a scale equivalent to $r_0^{-1}=0.394$ GeV. This information yields lattice predictions for the glueball masses, based on all lattice data, of around 1.6 GeV and 2.2 GeV for the $0^{++}$ and $2^{++}$ glueballs respectively. Setting the scale in a quenched lattice calculation is inherently imprecise because ratios of lattice observables are found to disagree with experimental values (unquenched) by different amounts for different ratios. Thus no common scale determination is possible for the quenched lattice. It is prudent to assign a systematic error of at least 10% to the scale. Since this dominates the statistical error, the conservative conclusion is a $0^{++}$ glueball mass of 1600(160) MeV. This is an energy range consistent with promising experimental glueball candidates such as the $f_0(1500)$ - for a review see ref [7]. A candidate for a $2^{++}$ glueball at 2230 MeV has also been reported recently [7].
Figure 2: The mass of the glueball states with quantum numbers $J^{PC}$ from ref [4]. The scale is set by $r_0^{-1}=0.394$ GeV, which yields the right hand scale in GeV. The solid points represent mass determinations whereas the open points are upper limits.
The predictions for the other $J^{PC}$ states are that they lie higher in mass, and the present state of knowledge is summarised in fig 2. Note that the lattice gives a clear indication that no light pseudoscalar glueball should exist. Remember that the lattice results are strictly upper limits. For the $J^{PC}$ values not shown, these upper limits are too weak to be of use.
Since quark - antiquark mesons can only have certain $J^{PC}$ values, it is of special interest to look for glueballs with $J^{PC}$ values not allowed for such mesons: $0^{--}$, $0^{+-}$, $1^{-+}$, $2^{+-}$, etc. Such spin-exotic states, often called "oddballs", would not mix directly with quark - antiquark mesons. This would make them a very clear experimental signal of the underlying glue dynamics. Various glueball models (bag models, flux tube models, QCD sum-rule inspired models,…) gave different predictions for the presence of such oddballs at relatively low masses. The lattice mass spectra clarify these uncertainties but, unfortunately for experimentalists, do not indicate any low-lying oddball candidates. The lightest candidate comes from a $T$ spin combination. Such a state could correspond to an oddball. Another interpretation is also possible, however, namely that a non-exotic state is responsible (this choice of interpretation can be resolved in principle by finding the degenerate 5 or 7 spin states of a $J=2$ or $J=3$ meson). The overall conclusion at present is that there is no evidence for any oddballs of mass less than 3 GeV.
Glueballs are defined in the quenched approximation and, hence, they do not decay into mesons since that would require quark - antiquark creation. It is, nevertheless, still possible to estimate the strength of the matrix element between a glueball and a pair of mesons within the quenched approximation. For the glueball to be a relatively narrow state, this matrix element must be small. A very preliminary attempt [6, 1] has been made to estimate the size of the coupling of the glueball to two pseudoscalar mesons. A relatively small value is found. Furthermore they see indications for a dependence on the pseudoscalar mass of the reduced decay matrix element. These conclusions imply that the quenched glueball mass determination was of relevance to the experimental situation since the mixing with other mesons would be small. Further work needs to be done to investigate this in more detail, in particular to study the mixing between the glueball and mesons since this mixing may be an important factor in the decay process.
In principle, it is possible to study on a lattice the glueball spectrum in full QCD vacua with sea quarks of mass $m_q$. For large $m_q$, the result is just the quenched result described above. For $m_q$ equal to the experimental light quark masses, the results should just reproduce the experimental meson spectrum - with the resultant uncertainty between glueball interpretations and other interpretations. The lattice enables these uncertainties to be resolved in principle: one obtains the spectrum for a range of values of $m_q$ between these limiting cases, so mapping glueball states at large $m_q$ to the experimental spectrum at light $m_q$.
3 Hybrid Mesons
Figure 3: The lattice static quark potential for the ground state and first excited state from ref [9], with the scale set by the lattice spacing in GeV. The energy difference between the excited potential and the ground state is seen to be well approximated [9] by a string model expression ($\pi/R$, as shown by the continuous line). Also shown are some of the lower lying states in these potentials obtained from the Schrödinger equation in the adiabatic approximation. The lattice potentials share a common self-energy so that the energy difference between the lowest hybrid level and the ground state meson is determined directly (1.36 GeV). The dotted curve shows the modification to the quenched lattice ground-state potential needed to give the experimental spectrum. This gives an estimate of the systematic error from quenching.
In order to set the scene for the study of mesons with gluonic excitations, it is worthwhile to summarise briefly the simple constituent quark model. This model of a massive quark and antiquark bound by a potential is only justified theoretically for $b\bar b$ and, to a lesser extent, $c\bar c$, but it is still a useful guide for light quark states. The mesonic states that can be made from $q\bar q$ with a spatial wavefunction of orbital angular momentum $L$ and total spin $S=0$ or $1$ have $J^{PC}$ values with $P=(-1)^{L+1}$ and $C=(-1)^{L+S}$, namely $0^{-+}$, $1^{--}$, $1^{+-}$, $0^{++}$, $1^{++}$, $2^{++}$, $2^{-+}$, $2^{--}$, etc.
Since the gluon can introduce no flavour quantum numbers, the $J^{PC}$ assignments will be of importance. Of special interest in the following will be the absence of certain $J^{PC}$ values in the above list. These states are known as spin-exotic and include $0^{--}$, $0^{+-}$, $1^{-+}$ and $2^{+-}$.
We define a hybrid meson as a $q\bar q$ system with additional gluonic excitation. The definition of a hybrid meson is less clear than for a glueball since even the basic quark model mesons have a gluonic component which is responsible for the binding force. So we must establish that the gluonic component is excited before labelling a state as a hybrid meson. This is straightforward for the case of static quarks at separation $R$. The ground state potential will then have cylindrical symmetry about the interquark axis while less symmetric configurations correspond to various excitations of the gluonic flux joining the sources. A pioneering study using lattice techniques [8] found that the first excited gluonic state arises from transverse excitations of the gluonic flux. Such spatial excitations of the distribution of the colour flux from quark to antiquark correspond to gluonic fields with angular momentum about the interquark axis and so are clearly hybrid states.
In molecular physics, it is common to assume that the electronic degrees of freedom adjust themselves with a much shorter timescale than that of the rotation of the molecule as a whole - this is the adiabatic approximation. For hybrid mesons, this will be valid if the gluonic degrees of freedom have a much shorter time-scale than those associated with the quarks. This is a plausible approach since we find gluonic excitations with energies exceeding 1 GeV while quark model excitations (orbital and radial) have smaller energies (of a few hundred MeV). Then the allowed $J^{PC}$ values of hybrid mesons bound in this excited gluonic potential can be easily determined within this adiabatic approximation using the Schrödinger equation. The lowest lying hybrid states are found [8] to have $J^{PC}$ values arising from two spatial symmetries: $0^{-+}$, $1^{-+}$, $2^{-+}$, $1^{--}$ and $0^{+-}$, $1^{+-}$, $2^{+-}$, $1^{++}$.
The first group corresponds to the states accessible from a "magnetic gluon" excitation while the second group are from an "electric gluon". The lattice determination [9] of the ground state and excited potentials is illustrated in fig 3 for the quenched case. At large interquark separation $R$, a hadronic string picture is expected to be a reasonable model and we see that the string model excitation energy ($\pi/R$ in the simplest version - but see ref [9]) gives a good description. Also shown is the spectrum of mesons in these potentials obtained in the adiabatic approximation for $b$ quarks. Although the absolute value of the bound state energy is not accessible because of lattice self-energy effects, the difference between energies of bound states in the ground and excited potential is completely predicted. The hybrid level shown is the lowest such level and has the above eight degenerate $J^{PC}$ values. Thus there will be mesons with exotic quantum numbers at this energy, which provides a prediction that can be checked by experiment (a rough numerical illustration of the adiabatic step follows below).
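As a rough illustration of this adiabatic step, one can bind quarks in a funnel-type ground state potential and in the same potential shifted by the string-model gap $\pi/R$, and compare the lowest levels. All parameter values below are illustrative stand-ins, not the fitted lattice potentials of ref [9]:

```python
import numpy as np

# Sketch of the adiabatic step: lowest bound states of a static quark pair in a
# funnel potential V0(R) = -e/R + sigma*R, and in the gluonically excited
# potential approximated by V0(R) + pi/R. Parameters are illustrative only.
# Units: GeV throughout, R in GeV^-1.
e, sigma, mu = 0.3, 0.18, 2.4          # Coulomb coeff., string tension, reduced mass
N, Rmax = 800, 20.0
R = np.linspace(Rmax / N, Rmax, N)
h = R[1] - R[0]

def lowest_level(V):
    """Lowest S-wave eigenvalue of -u''/(2*mu) + V*u with u(0) = u(Rmax) = 0."""
    diag = 1.0 / (mu * h * h) + V
    off = -0.5 / (mu * h * h) * np.ones(N - 1)
    H = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.eigvalsh(H)[0]

V0 = -e / R + sigma * R
gap = lowest_level(V0 + np.pi / R) - lowest_level(V0)
print(f"hybrid - ground splitting ~ {gap:.2f} GeV")   # order 1 GeV
```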
The ground state potential in the quenched approximation does not correctly reproduce the experimental spectrum. The simplest explanation is that in full QCD the short distance (Coulombic) component of the potential would be enhanced (by the factor $33/(33-2n_f)$ at leading order in perturbation theory), and such an enhancement is illustrated in fig 3 by the dotted ground state potential that does indeed reproduce the experimental spectrum. Using this prescription, but taking into account uncertainties from different approaches to modifying the quenched approximation, the lattice prediction [9] is for the lightest hybrid meson excitation to be at 4.19(15) GeV for $c\bar c$ and 10.81(25) GeV for $b\bar b$. These energy values lie above the open $D\bar D$ and $B\bar B$ thresholds. An alternative procedure, within the quenched approximation, is to focus [10] on the energy difference between the hybrid meson and the open threshold. This suggests that the lowest hybrid level may lie below the threshold.
These lattice predictions do not take account of the splitting of the degeneracy of the hybrid levels due to spin-spin and spin-orbit effects. Indeed, going beyond the adiabatic approximation, the non-exotic hybrid mesons can mix with states in the usual $q\bar q$ quark model and may thus be modified substantially. Evidence also exists from lattice studies [11] at small separation that the "magnetic" excitation is a few hundred MeV lighter than the "electric" excitation. This would imply that the degenerate hybrid levels would be split, with the lightest exotic state having $J^{PC}=1^{-+}$. The experimental detection of such states depends crucially on whether they lie above or below the open quark threshold. The quenched lattice estimate [9] shows that, for both $c\bar c$ and $b\bar b$ quarks, the lightest hybrid levels lie above threshold. The uncertainties due to the level splitting effects described above, combined with the uncertainty in interpretation of the quenched spectrum, both point to the possibility that a narrow spin exotic hybrid meson could exist close to the open quark threshold. It is important to search carefully for such states.
Figure 4: The lattice effective mass for a hybrid meson versus time separation $t$. The source used was a U-shaped path, while the sinks were combinations of U-shaped paths of three different sizes (diamonds, crosses and squares). The fit shown has a ground state mass of 0.98(26) in lattice units, using a tadpole-improved clover action; the hopping parameter corresponds to mesons made of strange quarks.
Figure 5: Preliminary results for the ordering of the hybrid meson levels for strange quarks [12]. The states with burst symbols are exotic. The dashed lines represent $q\bar q$ quark model states as determined on the lattice. The strong mixing of the states created by our hybrid operators with these is apparent for the pseudoscalar and vector meson cases.
In principle the lattice approach allows a study of hybrid mesons formed from light ($u$, $d$ and $s$) quarks. Preliminary results have recently been obtained [12] by the UKQCD collaboration. The method builds on the experience with static quarks and uses operators to create hybrid mesons in which the quark and antiquark are joined by colour flux which is excited in the transverse plane. Excitations of this kind are clearly non-trivial gluonic contributions and the mesonic states in such excited potentials include exotic $J^{PC}$ values. The lattice analysis is a fully relativistic analysis of propagating mesons. The approximations used are that of the quenched approximation for the vacuum and, in this preliminary study, we used a light quark mass corresponding to the strange quark (since lighter quarks are computationally more demanding). From the results we will be able to explore the splitting among hybrid meson levels as well as the mixing between non-exotic hybrid mesons and $q\bar q$ mesons.
Preliminary results [12] come from 70 lattices, which is only a small fraction of the eventual statistical sample. We used an SW-clover fermionic action with a tadpole-improved clover coefficient, at a hopping parameter corresponding to the strange quark. To establish our methods, we have studied the $q\bar q$ quark model mesons for S, P and D waves and successfully determined their energy levels. The signal for the hybrid mesons is weaker and our present data sample does not give precise estimates of the hybrid meson masses. For example, our results for one of these mesons are shown by the fit in fig 4 and are consistent with a mass ratio that still carries large errors. What is somewhat better determined, however, are the splitting effects. We see in fig 5 significant mixing of non-exotic hybrids created using our hybrid operators with $q\bar q$ mesons of the same quantum numbers. We intend to explore this more fully in future. We find that the exotic hybrids are all at comparable masses, with one of the spatial excitations slightly lower in mass, so that the lightest exotic hybrid meson would have $J^{PC}=1^{-+}$.
The preliminary study presented here was conducted using $s$-quarks. Quenched lattice studies of the QCD spectrum suggest that quark mass effects in the meson mass (or mass squared) are well described by a term linear in the quark mass. Experimental meson masses are consistent with this ansatz, which suggests that the $u\bar u$ and $d\bar d$ mesons will be around 160 MeV lighter than the $s\bar s$ mesons for masses around 1.5 GeV. Note that these lattice studies are for mesons made from two quarks of equal mass, which are thus eigenstates of charge conjugation $C$. For unequal masses (e.g. strange mesons) the lack of $C$ invariance masks the identification of spin exotic states.
There have been several experimental claims for hybrid mesons - for reviews see refs [7, 13]. Our results suggest that non spin-exotic candidates may need re-appraisal since big mixing effects are possible. For the exotic mesons, the favoured candidate to lie lowest will have $J^{PC}=1^{-+}$, and several experimental hints of such states have been reported.
4 Conclusions
We have summarised quenched lattice predictions for glueballs and hybrid mesons. Recent developments include an estimate of glueball decay widths and a first study of light quark hybrid mesons. The study of the light quark hybrid mesons with considerably greater statistics is currently under way. The qualitative features from such predictions are an essential guide to the experimental exploration of such mesons. Lattice studies with dynamical quarks will enable better control of the systematic error from quenching - this is also in progress.
5 Acknowledgements
I acknowledge the contributions made by my colleagues in the UKQCD collaboration, especially Pierre Lacock, to the study of hybrid mesons with light quarks.
• [1] J. Sexton, A. Vaccarino and D. Weingarten, Phys.Rev.Lett.75 (1995) 4563.
• [2] P. De Forcrand, et al., Phys. Lett. 152B (1985) 107
• [3] C. Michael and M. Teper, Nucl. Phys. B314 (1989) 347
• [4] UKQCD collaboration, G. Bali, K. Schilling, A. Hulsebos, A. C. Irving, C. Michael and P. Stephenson, Phys. Lett. B309 (1993) 378-84.
• [5] H. Chen, J. Sexton, A. Vaccarino and D. Weingarten, Nucl. Phys. B (Proc. Suppl.) 34 (1994) 357.
• [6] J. Sexton, A. Vaccarino and D. Weingarten, Nucl. Phys. B (Proc. Suppl.) 42 (1995) 27
• [7] F. E. Close, Proc. Hadron Conference, Manchester 1995; hep-ph/9509245.
• [8] L. A. Griffiths, P. E. L. Rakow and C. Michael, Phys. Lett. B 129 (1983) 351.
• [9] S. Perantonis and C. Michael, Nucl. Phys. B347 (1990) 854.
• [10] R. Sommer, hep-lat/9401037 (to be published in Phys. Rep.).
• [11] I. H. Jorysz and C. Michael, Nucl. Phys. B 302 (1987) 448.
• [12] P. Lacock, C. Michael et al., UKQCD Collaboration (in preparation).
• [13] S. U. Chung, These proceedings.
Complete List of Papers with Abstracts
1. Subhas Ghosal, Richard J. Doyle, Christiane P. Koch, and Jeremy M. Hutson
Stimulating the production of deeply bound RbCs molecules with laser pulses: the role of spin-orbit coupling in forming ultracold molecules
accepted for publication in New J. Phys. 11 (2009)
We investigate the possibility of forming deeply bound ultracold RbCs molecules by a two-color photoassociation experiment. We compare the results with those for Rb2 in order to understand the characteristic differences between heteronuclear and homonuclear molecules. The major differences arise from the different long-range potential for excited states. Ultracold 85Rb and 133Cs atoms colliding on the X1Σ+ potential curve are initially photoassociated to form excited RbCs molecules in the region below the Rb(5S) + Cs(6P1/2) asymptote. We explore the nature of the Ω=0+ levels in this region, which have mixed A1Σ+ and b3Π character. We then study the quantum dynamics of RbCs by a time-dependent wavepacket (TDWP) approach. A wavepacket is formed by exciting a few vibronic levels and is allowed to propagate on the coupled electronic potential energy curves. For a detuning of 7.5 cm-1, the wavepacket for RbCs reaches the short-range region in about 13 ps, which is significantly faster than for the homonuclear Rb2 system; this is mostly because of the absence of an R-3 long-range tail in the excited-state potential curves for heteronuclear systems. We give a simple semiclassical formula that relates the time taken to the long-range potential parameters. For RbCs, in contrast to Rb2, the excited-state wavepacket shows a substantial peak in singlet density near the inner turning point, and this produces a significant probability of deexcitation to form ground-state molecules bound by up to 1500 cm-1. Our analysis of the role of spin-orbit coupling concerns the character of the mixed states in general and is important for both photoassociation and stimulated Raman deexcitation.
pdf (720 KB)
2. Christiane P. Koch, Mamadou Ndong and Ronnie Kosloff
Two-photon coherent control of femtosecond photoassociation
accepted for publication in Faraday Discussions 142
Photoassociation with short laser pulses has been proposed as a technique to create ultracold ground state molecules. A broad-band excitation seems the natural choice to drive the series of excitation and deexcitation steps required to form a molecule in its vibronic ground state from two scattering atoms. First attempts at femtosecond photoassociation were, however, hampered by the requirement to eliminate the atomic excitation leading to trap depletion. On the other hand, molecular levels very close to the atomic transition are to be excited. The broad bandwidth of a femtosecond laser then appears to be rather an obstacle. To overcome the ostensible conflict of driving a narrow transition by a broad-band laser, we suggest a two-photon photoassociation scheme. In the weak-field regime, a spectral phase pattern can be employed to eliminate the atomic line. When the excitation is carried out by more than one photon, different pathways in the field can be interfered constructively or destructively. In the strong-field regime, a temporal phase can be applied to control dynamic Stark shifts. The atomic transition is suppressed by choosing a phase which keeps the levels out of resonance. We derive analytical solutions for atomic two-photon dark states in both the weak-field and strong-field regime. Two-photon excitation may thus pave the way toward coherent control of photoassociation. Ultimately, the success of such a scheme will depend on the details of the excited electronic states and transition dipole moments. We explore the possibility of two-photon femtosecond photoassociation for alkali and alkaline-earth metal dimers and present a detailed study for the example of calcium.
Get pdf (310 KB)
3. Mamadou Ndong, Hillel Tal-Ezer, Ronnie Kosloff, and Christiane P. Koch
A Chebychev propagator for inhomogeneous Schrödinger equations
J. Chem. Phys. 130, 124108 (2009) (arXiv:0812.4428)
We present a propagation scheme for time-dependent inhomogeneous Schrödinger equations which occur for example in optimal control theory or in reactive scattering calculations. A formal solution based on a polynomial expansion of the inhomogeneous term is derived. It is subjected to an approximation in terms of Chebychev polynomials. Different variants for the inhomogeneous propagator are demonstrated and applied to two examples from optimal control theory. Convergence behavior and numerical efficiency are analyzed.
pdf (670 KB)
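For orientation, the standard (homogeneous) Chebychev propagator that this scheme generalizes approximates $e^{-iHt}\psi$ by a Bessel-weighted sum of Chebychev polynomials in a rescaled Hamiltonian. A minimal dense-matrix sketch; diagonalizing for the spectral bounds is for brevity only, in practice they are estimated:

```python
import numpy as np
from scipy.special import jv

def chebychev_step(H, psi, dt, nterms=64):
    """Approximate exp(-i*H*dt) @ psi for a Hermitian matrix H (hbar = 1)."""
    Emin, Emax = np.linalg.eigvalsh(H)[[0, -1]]    # spectral bounds (estimated in practice)
    dE, Ebar = 0.5 * (Emax - Emin), 0.5 * (Emax + Emin)
    Hn = (H - Ebar * np.eye(len(psi))) / dE        # spectrum rescaled to [-1, 1]
    alpha = dE * dt
    phi0, phi1 = psi.astype(complex), -1j * (Hn @ psi)
    out = jv(0, alpha) * phi0 + 2 * jv(1, alpha) * phi1
    for n in range(2, nterms):                     # recursion phi_{n+1} = -2i*Hn*phi_n + phi_{n-1}
        phi0, phi1 = phi1, -2j * (Hn @ phi1) + phi0
        out += 2 * jv(n, alpha) * phi1
    return np.exp(-1j * Ebar * dt) * out
```

The expansion converges once nterms exceeds $\alpha = dt\,(E_{max}-E_{min})/2$ by a modest margin, which is what makes this family of propagators attractive for long time steps.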
4. Christiane P. Koch
Perspectives for coherent optical formation of strontium molecules in their electronic ground state
Phys. Rev. A 78, 063411 (2008) (arXiv:0811.0015)
Optical Feshbach resonances [Phys. Rev. Lett. 94, 193001 (2005)] and pump-dump photoassociation with short laser pulses [Phys. Rev. A 73, 033408 (2006)] have been proposed as means to coherently form stable ultracold alkali dimer molecules. In an optical Feshbach resonance, the intensity and possibly frequency of a cw laser are ramped up linearly followed by a sudden switch-off of the laser. This is applicable to tightly trapped atom pairs. In short-pulse photoassociation, the pump pulse forms a wave-packet in an electronically excited state. The ensuing dynamics carry the wave-packet to shorter internuclear distances where, after half a vibrational period, it can be deexcited to the electronic ground state by the dump pulse. Short-pulse photoassociation is suited for both shallow and tight traps. The applicability of these two means to produce ultracold molecules is investigated here for 88Sr. Dipole-allowed transitions proceeding via the B1Σu+ excited state as well as transitions near the intercombination line are studied.
Get pdf (617 KB)
5. Christiane P. Koch and Robert Moszynski
Engineering an all-optical route to ultracold molecules in their vibronic ground state
Phys. Rev. A 78, 043417 (2008) (arXiv:0810.0179)
We propose an improved photoassociation scheme to produce ultracold molecules in their vibronic ground state for the generic case where non-adiabatic effects facilitating transfer to deeply bound levels are absent. Formation of molecules is achieved by short laser pulses in a Raman-like pump-dump process where an additional near-infrared laser field couples the excited state to an auxiliary state. The coupling due to the additional field effectively changes the shape of the excited state potential and allows for efficient population transfer to low-lying vibrational levels of the electronic ground state. Repetition of many pump-dump sequences together with collisional relaxation allows for accumulation of molecules in v=0.
Get pdf (380 KB)
6. José P. Palao, Ronnie Kosloff, and Christiane P. Koch
Protecting coherence in Optimal Control Theory: State dependent constraint approach
Phys. Rev. A 77, 063412 (2008) (arXiv:0707.2401)
Optimal control theory is developed for the task of obtaining a primary objective in a subspace of the Hilbert space while avoiding other subspaces of the Hilbert space. The primary objective can be a state-to-state transition or a unitary transformation. A new optimization functional is introduced which leads to monotonic convergence of the algorithm. This approach becomes necessary for molecular systems subject to processes implying loss of coherence such as predissociation or ionization. In these subspaces controllability is hampered or even completely lost. Avoiding the lossy channels is achieved via a functional constraint which depends on the state of the system at each instant in time. We outline the resulting new algorithm, discuss its convergence properties and demonstrate its functionality for the example of a state-to-state transition and of a unitary transformation for a model of cold Rb2.
Get pdf (712 KB)
7. H. K. Pechkis, D. Wang, Y. Huang, E. E. Eyler, P. L. Gould, W. C. Stwalley, C. P. Koch
Enhancement of the formation of ultracold 85Rb2 molecules due to resonant coupling
Phys. Rev. A 76, 022504 (2007) (arXiv:0707.2401)
We have studied the effect of resonant electronic state coupling on the formation of ultracold ground-state 85Rb2. Ultracold Rb2 molecules are formed by photoassociation (PA) to a coupled pair of 0u+ states, 0u+(P1/2) and 0u+(P3/2), in the region below the 5S+5P1/2 limit. Subsequent radiative decay produces high vibrational levels of the ground state, X1Σg+. The population distribution of these X state vibrational levels is monitored by resonance-enhanced two-photon ionization through the 21Σu+ state. We find that the populations of vibrational levels v''=112-116 are far larger than can be accounted for by the Franck-Condon factors for 0u+(P1/2) → X1Σg+ transitions with the 0u+(P1/2) state treated as a single channel. Further, the ground-state molecule population exhibits oscillatory behavior as the PA laser is tuned through a succession of 0u+ state vibrational levels. Both of these effects are explained by a new calculation of transition amplitudes that includes the resonant character of the spin-orbit coupling of the two 0u+ states. The resulting enhancement of more deeply bound ground-state molecule formation will be useful for future experiments on ultracold molecules.
Get pdf (290 KB)
8. Christiane P. Koch, Ronnie Kosloff, Eliane Luc-Koenig, Françoise Masnou-Seeuws, and Anne Crubellier
Photoassociation with chirped laser pulses : Calculation of the absolute number of molecules per pulse
J. Phys. B 39, S1017 (2006)
The total number of molecules produced in a pulsed photoassociation of ultracold atoms is a crucial link between theory and experiment. A calculation based on first principles can determine the experimental feasibility of a pulsed photoassociation scheme. The calculation method considers an initial thermal ensemble of atoms. This ensemble is first decomposed into a representation of partial spherical waves. The photoassociation dynamics is calculated by solving the multichannel time-dependent Schrödinger equation on a mapped grid. The molecules are primarily assembled in a finite region of internuclear distances, the 'photoassociation window'. The ensemble average was calculated by adding the contributions from initial scattering states confined to a finite volume. These states are Boltzmann averaged where the partition function is summed numerically. Convergence is obtained for sufficiently large volume. The results are compared to a thermal averaging procedure based on scaling laws which leads to a single representative initial partial wave which is sufficient to represent the density in the 'photoassociation window'. For completeness a third high-temperature thermal averaging procedure is described which is based on random phase thermal Gaussian initial states. The absolute number of molecules in the two first calculation methods agree to within experimental error for photoassociation with picosecond pulses for a thermal ensemble of rubidium or caesium atoms in ultracold conditions.
Get pdf (600 KB)
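A hedged sketch of the Boltzmann-averaging step described above (all numbers are placeholders, not the paper's data): per-state observables are weighted by $e^{-E_n/k_BT}$ and normalized by a numerically summed partition function.

```python
import numpy as np

# Boltzmann average over box-discretized scattering states: observables O_n at
# collision energies E_n are weighted by exp(-E_n/kT); Z is summed numerically.
# Energies and observables below are illustrative placeholders.
k_B = 3.1668e-6   # Boltzmann constant in atomic units (hartree/K)
T = 100e-6        # 100 microkelvin

E = np.linspace(1e-12, 5e-9, 200)     # box-state collision energies (hartree)
O = np.exp(-E / 1e-9)                 # some per-state observable (placeholder)

w = np.exp(-E / (k_B * T))
Z = w.sum()                           # partition function, summed numerically
O_thermal = (w * O).sum() / Z
print(O_thermal)
```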
9. Ulrich Poschinger, Wenzel Salzmann, Roland Wester, Matthias Weidemüller, Christiane P. Koch, Ronnie Kosloff
Theoretical model for ultracold molecule formation via adaptive feedback control
J. Phys. B 39, S1001 (2006) (physics/0604140)
We investigate pump-dump photoassociation of ultracold molecules with amplitude- and phase-modulated femtosecond laser pulses. For this purpose a perturbative model for the light-matter interaction is developed and combined with a genetic algorithm for adaptive feedback control of the laser pulse shapes. The model is applied to the formation of 85Rb2 molecules in a magneto-optical trap. We find for optimized pulse shapes an improvement for the formation of ground state molecules by more than a factor of 10 compared to unshaped pulses at the same pump-dump delay time, and by 40% compared to unshaped pulses at the respective optimal pump-dump delay time. Since our model yields directly the spectral amplitudes and phases of the optimized pulses, the results are directly applicable in pulse shaping experiments.
Get pdf (441 KB)
10. Christiane P. Koch, Ronnie Kosloff, Françoise Masnou-Seeuws
Short-pulse photoassociation in rubidium below the D1 line
Phys. Rev. A 73, 043409 (2006) (physics/0511235)
Photoassociation of two ultracold rubidium atoms and the subsequent ground state molecule formation is investigated theoretically. The method employs laser pulses inducing transitions via excited states correlated to the 5S+5P1/2 asymptote. Weakly bound ground state molecules can be created by a single pulse while the formation of more deeply bound molecules requires a two-color pump-dump scenario. Deeply bound ground state molecules can be produced only if efficient mechanisms for both pump and dump steps exist. While long-range 1/R3-potentials allow for efficient photoassociation, stabilization is facilitated by the resonant spin-orbit coupling of the 0u+ states. Molecules in the singlet ground state bound by a few wave numbers can thus be formed. This provides a promising first step toward ground state molecules which are ultracold in both translational and vibrational degrees of freedom.
Get pdf (1.4 MB)
11. Christiane P. Koch, Eliane Luc-Koenig, Françoise Masnou-Seeuws
Making ultracold molecules in a two color pump-dump photoassociation scheme using chirped pulses
Phys. Rev. A 73, 033408 (2006) (physics/0508090)
This theoretical paper investigates the formation of ground state molecules from ultracold cesium atoms in a two-color scheme. Following previous work on photoassociation with chirped picosecond pulses [Luc-Koenig et al., Phys. Rev. A 70, 033414 (2004)], we investigate stabilization by a second (dump) pulse. By appropriately choosing the dump pulse parameters and time delay with respect to the photoassociation pulse, we show that a large number of deeply bound molecules are created in the ground triplet state. We discuss (i) broad-bandwidth dump pulses which maximize the probability to form molecules while creating a broad vibrational distribution as well as (ii) narrow-bandwidth pulses populating a single vibrational ground state level, bound by 113 cm-1. The use of chirped pulses makes the two-color scheme robust, simple and efficient.
Get pdf (1.4 MB)
12. Sören Dittrich, Hans-Joachim Freund, Christiane P. Koch, Ronnie Kosloff, and Thorsten Klüner
Two-dimensional surrogate Hamiltonian investigation of laser-induced desorption of NO/NiO(100)
J. Chem. Phys. 124, 024702 (2006)
The photodesorption of NO from NiO(100) is studied from first principles, with electronic relaxation treated by the use of the surrogate Hamiltonian approach. Two nuclear degrees of freedom of the adsorbate-substrate system are taken into account. To perform the quantum dynamical wave-packet calculations, a massively parallel implementation with a one-dimensional data decomposition had to be introduced. The calculated desorption probabilities and velocity distributions are in qualitative agreement with experimental data. The results are compared to those of stochastic wave-packet calculations where a sufficiently large number of quantum trajectories is propagated within a jumping wave-packet scenario.
Get pdf (225 kB)
13. Christiane P. Koch, Françoise Masnou-Seeuws, Ronnie Kosloff
Creating Ground State Molecules with Optical Feshbach Resonances in Tight Traps
Phys. Rev. Lett. 94, 193001 (2005) (quant-ph/0412166)
We propose to create ultracold ground state molecules in an atomic Bose-Einstein condensate by adiabatic crossing of an optical Feshbach resonance. We envision a scheme where the laser intensity and possibly also frequency are linearly ramped over the resonance. Our calculations for 87Rb show that for sufficiently tight traps it is possible to avoid spontaneous emission while retaining adiabaticity, and conversion efficiencies of up to 50% can be expected.
Get pdf (982 kB) or gzipped PS (4.24 MB)
14. Christiane P. Koch, José P. Palao, Ronnie Kosloff, Françoise Masnou-Seeuws
Stabilization of Ultracold Molecules Using Optimal Control Theory
Phys. Rev. A 70, 013402 (2004) (quant-ph/0402066)
In recent experiments on ultracold matter, molecules have been produced from ultracold atoms by photoassociation, Feshbach resonances, and three-body recombination. The created molecules are translationally cold, but vibrationally highly excited. This will eventually lead them to be lost from the trap due to collisions. We propose shaped laser pulses to transfer these highly excited molecules to their ground vibrational level. Optimal control theory is employed to find the light field that will carry out this task with minimum intensity. We present results for the sodium dimer. The final target can be reached to within 99% provided the initial guess field is physically motivated. We find that the optimal fields contain the transition frequencies required by a good Franck-Condon pumping scheme. The analysis identifies the ranges of intensity and pulse duration which are able to achieve this task before any other competing processes take place. Such a scheme could produce stable ultracold molecular samples or even stable molecular Bose-Einstein condensates.
Get pdf (700 kB) or gzipped PS (1.2 MB)
15. David Gelman, Christiane P. Koch, Ronnie Kosloff
Dissipative quantum dynamics with the Surrogate Hamiltonian approach. A comparison between spin and harmonic baths
J. Chem. Phys. 121, 661 (2004) (quant-ph/0402144)
The dissipative quantum dynamics of an anharmonic oscillator coupled to a bath is studied with the purpose of elucidating the differences between the relaxation to a spin bath and to a harmonic bath. Converged results are obtained for the spin bath by the surrogate Hamiltonian approach. This method is based on constructing a system-bath Hamiltonian, with a finite but large number of spin bath modes, that mimics exactly a bath with an infinite number of modes for a finite time interval. Convergence with respect to the number of simultaneous excitations of bath modes can be checked. The results are compared to calculations that include a finite number of harmonic modes carried out by using the multiconfiguration time-dependent Hartree method of Nest and Meyer [J. Chem. Phys. 119, 24 (2003)]. In the weak coupling regime, at zero temperature and for small excitations of the primary system, both methods converge to the Markovian limit. When initially the primary system is significantly excited, the spin bath can saturate restricting the energy acceptance. An interaction term between bath modes that spreads the excitation eliminates the saturation. The loss of phase between two cat states has been analyzed and the results for the spin and harmonic baths are almost identical. For stronger couplings, the dynamics induced by the two types of baths deviate. The accumulation and degree of entanglement between the bath modes have been characterized. Only in the spin bath the dynamics generate entanglement between the bath modes.
16. Christiane P. Koch, Thorsten Klüner, Hans-Joachim Freund and Ronnie Kosloff
Surrogate Hamiltonian Study of electronic relaxation in the femtosecond laser induced desorption of NO/NiO(100)
J. Chem. Phys. 119, 1750-1765 (2003)
A microscopic model for electronic quenching in the photodesorption of NO from NiO(100) is developed. The quenching is caused by the interaction of the excited adsorbate-substrate complex with electron hole pairs (O2p->Ni3d states) in the surface. The electron hole pairs are described as a bath of two level systems (TLS) which are characterized by an excitation energy and a dipole charge. The parameters are connected to estimates from photoemission spectroscopy and configuration interaction (CI) calculations. Due to the localized electronic structure of NiO a direct optical excitation mechanism can be assumed, and a reliable potential energy surface for the excited state is available. Thus a treatment of all steps in the photodesorption event from first principles becomes possible for the first time. The Surrogate Hamiltonian method, which allows convergence to be monitored, is employed to calculate the desorption dynamics. Desorption probabilities of the right order of magnitude and velocities in the experimentally observed range are obtained.
17. Christiane P. Koch, Thorsten Klüner, Hans-Joachim Freund and Ronnie Kosloff
Femtosecond Photodesorption of Small Molecules from Surfaces: A Theoretical Investigation from First Principles
Phys. Rev. Lett. 90, 117601 (2003)
A microscopic model for the excitation and relaxation processes in photochemistry at surfaces is developed. Our study is based on ab initio calculations and the surrogate Hamiltonian method treating surface electron-hole pairs as a bath of two-level systems. Desorption probabilities and velocities in the experimentally observed range are obtained. The excited state lifetime is calculated, and a dependence of observables on pulse length is predicted.
18. Christiane P. Koch, Thorsten Klüner and Ronnie Kosloff
A complete quantum description of an ultrafast pump-probe charge transfer event in condensed phase
J. Chem. Phys. 116, 7983-7996 (2002)
An ultrafast photoinduced charge transfer event in condensed phase is simulated. The interaction with the field is treated explicitly within a time-dependent framework. The description of the interaction of the system with its environment is based on the Surrogate Hamiltonian method where the infinite number of degrees of freedom of the environment is approximated by a finite set of two-level modes for a limited time. This method is well suited to ultrafast events, since it is not limited by weak coupling between system and environment. Moreover, the influence of the external field on the system-bath coupling is included naturally. The Surrogate Hamiltonian method is generalized to incorporate two electronic states including all possible system-bath interactions. The method is applied to a description of a pump-probe experiment where every step of the cycle is treated consistently. Dynamical variables are considered which go beyond rates of charge transfer such as the transient absorption spectrum. The parameters of the model are chosen to mimic the mixed valence system (NH3)5RuNCRu(CN)5-.
19. Christiane Koch and Bernd Esser
Spin-boson Hamiltonian and optical absorption of molecular dimers
Phys. Rev. A 61, 022508 (2000) (quant-ph/9911042)
An analysis of the eigenstates of a symmetry-broken spin-boson Hamiltonian is performed by computing Bloch and Husimi projections. The eigenstate analysis is combined with the calculation of absorption bands of asymmetric dimer configurations constituted by monomers with nonidentical excitation energies and optical transition matrix elements. Absorption bands with regular and irregular fine structures are obtained and related to the transition from the coexistence to a mixing of adiabatic branches in the spectrum. It is shown that correlations between spin states allow for an interpolation between absorption bands for different optical asymmetries.
20. Christiane Koch and Bernd Esser
Spectrum, lifetime distributions and relaxation in a dimer with strong excitonic-vibronic coupling
J. Lumin. 81 (1999) 171-181
The fine structure of the complex quantum spectrum of a dimer constituted by monomers with a finite lifetime in the excited states and a strong excitonic-vibronic coupling has been investigated in detail. Lifetime distributions of the spectrum are analysed for different system parameter sets. It is shown that in case of an asymmetric configuration the spectrum may be characterised by a broad distribution of the lifetimes of the eigenstates. This can give rise to a strongly varying relaxation behaviour, which is due to the mixing of the monomer spectra with two different excitonic lifetimes in the dimer spectrum.
Conference proceedings
Book contributions
• A. Lindinger, V. Bonačić-Koutecký, R. Mitrić, D. Tannor, C. P. Koch, V. Engel, T. M. Bernhardt, J. Jortner, A. Mirabal, L. Wöste.
Analysis and control of small isolated molecular systems
In: Analysis and control of ultrafast photoinduced reactions. Springer Series in Chemical Physics Vol. 87
Eds. O. Kühn and L. Wöste, Springer Berlin 2007.
• Christiane P. Koch, David Gelman, Ronnie Kosloff, Thorsten Klüner.
Irreversibilität in Quantensystemen mittels der Methode des Surrogate Hamiltonian [Irreversibility in quantum systems via the Surrogate Hamiltonian method]
In: Physik Irreversibler Prozesse und Selbstorganisation. Eds. T. Pöschel, L. Schimansky-Geier and H. Malchow. Logos Verlag Berlin 2006.
The Surrogate Hamiltonian method represents a novel approach to treating dissipative quantum systems: a 'surrogate' Hamiltonian is constructed which, for limited times, generates the same dynamics as the true Hamiltonian. The dissipative time evolution is then obtained by solving the time-dependent Schrödinger equation for the total system and subsequently tracing over the environmental degrees of freedom, i.e. a density-matrix propagation is avoided. Simple examples illustrating the method are presented and further applications are discussed.
Vol. 42 No. 4 - Highlights
Electroweak model without a Higgs particle (Vol. 42, No. 4)
Thanks to its great accuracy in predicting experimental data, the standard model of particle physics is widely considered to be a building block of our current knowledge of the structure of matter. In spite of this success, we are still lacking an essential piece of evidence, namely the detection of the Higgs boson, a hypothetical massive elementary particle whose existence makes it possible to explain how most of the known elementary particles become massive. In this paper, an alternative electroweak model is presented that assumes running coupling constants described by energy-dependent entire functions. Contrary to the conventional formulation, the action contains no physical scalar fields and no Higgs particle, even though the predicted particle masses are compatible with known experimental values. In addition, the vertex couplings possess an energy scale for predicting scattering amplitudes that can be tested in current particle accelerators. As a result, the paper provides an essential alternative to the current established knowledge in the field and addresses an issue that might soon be resolved, as the Large Hadron Collider could provide the experimental evidence of the existence or non-existence of the Higgs boson.
Ultraviolet complete electroweak model without a Higgs particle
J.W. Moffat, Eur. Phys. J. Plus, 126, 53 (2011)
Atomic photoionization: When does it actually begin? (Vol. 42, No. 4)
Figure: The crest position of the electron wave packet after the end of the XUV pulse is fitted with a straight line corresponding to free propagation. The inset shows the extrapolation of the free propagation inside the atom. The XUV pulse is over-plotted with the black dotted line.
Among other spectacular applications of the attosecond streaking technique, it has become possible to determine the time delay between subjecting an atom to a short XUV pulse and the subsequent emission of the photoelectron. This observation opened up the question of when atomic photoionization actually begins.
We address this question by solving the time-dependent Schrödinger equation and by carefully examining the time evolution of the photoelectron wave packet. In this way we establish the apparent "time zero" at which the photoelectron leaves the atom. At the same time, we provide a stationary treatment of the photoionization process and connect the observed time delay with the quantum phase of the dipole transition matrix element, whose energy dependence defines the emission timing.
As an illustration of our approach, we consider the valence shell photoionization of Ne and double photoionization (DPI) of He. In Ne, we relate the opposite signs of the time delays t0(2s) and t0(2p) (Figure) to the energy dependence of the p and d scattering phases, which is governed by the Levinson-Seaton theorem. In He, we demonstrate that an attosecond time delay measurement can distinguish between the two leading mechanisms of DPI: the fast shake-off (SO) and the slow knockout (KO) processes. The SO mechanism is driven by a fast rearrangement of the atomic core after departure of the primary photoelectron. The KO mechanism involves repeated interaction of the primary photoelectron with the remaining electron bound to the singly charged ion.
Timing analysis of two-electron photoemission
A.S. Kheifets, I.A. Ivanov and Igor Bray, J. Phys. B: At. Mol. Opt. Phys. 44, 101003 (2011)
Practical limits for detection of ferromagnetism (Vol. 42, No. 4)
Figure: Ferromagnetic saturation moment of a ZnO substrate measured in five consecutive stages, exemplifying two of the most common sources of ferromagnetic contamination and showing a type of reversibility upon annealing under different atmospheres, which is often observed in some of the recently discovered nanomagnets mentioned in the text (the detection of ferromagnetism below 5×10^-7 emu is hindered by setup-related artefacts).
Over the last ten years, signatures of room-temperature ferromagnetism have been found in thin films and nanoparticles of various materials that are non-ferromagnetic in bulk. The implications of such high temperature ferromagnetism are in some cases so extraordinary, e.g. dilute magnetic semiconductors (DMS) with carrier-mediated ferromagnetism well above room temperature would revolutionize semiconductor-based spintronics, that they triggered an enormous volume of materials research and development. However, the magnetics community soon started realizing the dangers of measuring the very small magnetic moments of these nanomagnets (nanometer sized materials with nano-emu magnetic moments). Pushing state-of-the-art magnetometers to their sensitivity limits, where extrinsic ferromagnetic signals originating from magnetic contamination and measurement artefacts are non-negligible, these new nanomagnets raise a number of challenges to magnetometry techniques and, most of all, to their users' methods and procedures. While new nanomagnets continue being "discovered" based on magnetometry measurements, the general opinion is moving towards the notion that finding a signature of ferromagnetism by means of magnetometry, i.e. a magnetic hysteresis, is necessary but not sufficient to claim its existence.
Through an extensive analysis of various materials subject to different experimental conditions, the authors aim at re-establishing the reliability limits for detection of ferromagnetism using high sensitivity magnetometry. The paper provides a roadmap describing how extrinsic ferromagnetism can be avoided or otherwise removed, its magnitude when such optimum conditions cannot be guaranteed, and to what extent its characteristics may or may not be used as criteria to distinguish it from intrinsic ferromagnetism.
Practical limits for detection of ferromagnetism using highly sensitive magnetometry techniques
L.M.C. Pereira, J.P. Araújo, M.J. Van Bael, K. Temst and A. Vantomme, J. Phys. D: Appl. Phys. 44, 215001 (2011)
Classical and quantum approaches to the photon mass (Vol. 42, No. 4)
Figure: In new effects of the Aharonov-Bohm type, coherent superpositions of particles possessing opposite electromagnetic properties are used. For the one shown in this figure, charged particles interact with the magnetic vector potential A of a solenoid. If the photon mass is not zero, the electromagnetic interaction is modified. Measuring the corresponding change of quantum phase shift with an interferometer leads to an estimate of mγ.
Since Proca's prediction in 1936 that the rest mass of the photon, mγ, may not be zero, there have been several searches for evidence for a possible finite photon mass. In fact, for even a very small value of mγ, fascinating physical implications arise such as breakdowns of Coulomb's law, wavelength dependence of the speed of light in free space, existence of longitudinal electromagnetic waves, presence of an additional Yukawa potential for magnetic dipole fields, and effects that a photon mass may have during early-universe inflation and the resulting magnetic fields on a cosmological scale.
Traditionally, limits on mγ of < 10^-49 g have been obtained by means of classical approaches, such as searches for departures from Coulomb's law. What happens if we instead exploit quantum approaches? Could better limits be achieved? This is the novel objective of the present work, in which quantum physics is applied to the photon mass question. We first examine the implications that the Aharonov-Bohm class of quantum effects (Figure) have on searches for mγ, and then move on to explore the quantum electrodynamics scenario with an approach that employs measurements of the electron's g-factor. Within the quantum framework, we show that competitive new limits on the photon mass may reach the range 10^-54 g < mγ < 10^-53 g. We provide an assessment of the state of the art in these areas and a prognosis for future work.
A survey of existing and proposed classical and quantum approaches to the photon mass
G. Spavieri, J. Quintero, G.T. Gillies and M. Rodriguez, Eur. Phys. J. D 61, 531 (2011)
UV absorption spectroscopy to monitor reactive plasma (Vol. 42, No. 4)
Figure: Absorbance of the HBr gas at three pressures, as used in silicon gate etching processes.
A new high-sensitivity technique is developed by extending broad-band absorption spectroscopy to the vacuum ultraviolet (VUV) spectral region. It is well adapted for the detection and density measurement of closed-shell molecules that have strong electronic transitions in the 110-200 nm range. Among them, molecules such as Cl2, HBr, BrCl, Br2, HCl, BCl3, SiCl4, SiF4, CCl4, SF6, CH2F2 and O2, used in the microelectronics industry for etching or deposition processes, are of prime interest. In our system, the light of a deuterium lamp crosses a 50 cm diameter industrial etch reactor containing the gas of interest. The transmitted light is recorded with a 20 cm focal-length VUV scanning spectrometer backed with a photomultiplier tube (PMT). The attached figure shows the absorbance at three pressures of the HBr gas, which is used in silicon gate etching processes. Peaks at 137, 143 and 150 nm, which show a non-linear but very strong absorbance, correspond to transitions to Rydberg states of the molecule and can be used for the detection of very small HBr densities. In our present experiment, an absorption rate of 2%, corresponding to about 0.03 mTorr of HBr, can easily be detected on the 143 nm absorption peak. Replacing the PMT detector by a VUV-sensitive CCD camera would make it possible to reach the same signal-to-noise ratio with a few seconds' acquisition time. For HBr pressures in the 1 to 100 mTorr range, the continuum part of the absorption spectrum (160-200 nm), which shows a weak but linear absorbance, can be used. The technique is applied to monitor, in a Cl2-HBr mixture, the dissociation rate of HBr and the amount of Br2 formed under different plasma conditions.
Vacuum UV broad-band absorption spectroscopy: a powerful diagnostic tool for reactive plasma monitoring
G Cunge, M Fouchier, M Brihoum, P Bodart, M Touzeau and N Sadeghi, J. Phys. D: Appl. Phys. 44, 122001 (2011)
Flexibility and phase transitions in zeolite frameworks (Vol. 42, No. 4)
Figure: Detail of a zeolite structure built from corner-sharing tetrahedral units.
The zeolites are a group of minerals whose complex and beautiful atomic structures are formed by different arrangements of a very simple building block: a group of four oxygen atoms forming a tetrahedron, with a silicon or aluminium atom at the centre. Each oxygen atom belongs to two tetrahedra, so the structure can be viewed as a network of tetrahedra linked at the corners.
Zeolites have found widespread applications in chemical industry, particularly as catalysts. Their chemical properties depend on the shape of the pores and channels that run through the structure, containing water molecules, ions and even small organic molecules. More than a hundred different frameworks are known to exist in natural minerals or have been synthesised by chemists.
A fundamental geometric question is whether it is possible for the tetrahedra of the framework to exist in an undistorted, geometrically ideal form, or whether distortions are inevitably caused by the linking together of the tetrahedral units to form the structure. A new study links this question to the compression behaviour of zeolites in the analcime group. Four different structures display a common behaviour: they exist in a high-symmetry form at low pressures when the tetrahedra can exist without distortions, but transform to low-symmetry forms under pressure when distortions become inevitable. A deeper understanding of the rules governing the formation of zeolite structures may one day allow us to synthesise structures with specific properties on demand. New insights into the physics and geometry of frameworks are an important step in this direction.
Flexibility windows and phase transitions of ordered and disordered ANA framework zeolites
S. A. Wells, A. Sartbaeva and G. D. Gatta, EPL, 94, 56001 (2011)
Molecular motors in the rigid and crossbridge models (Vol. 42, No. 4)
Figure: Examples of spontaneous oscillations of motor assemblies in the crossbridge model (red) and the rigid model (blue).
In cells, motor proteins use chemical energy to generate motion and forces. Motors often interact and form clusters because they are connected to a single rigid backbone. In a muscle the backbone is made by association of the motor tails. The backbone motion results from the action of all the motors, and feeds back on each motor. Previous works suggest that motor assemblies are endowed with complex dynamical properties, including dynamic instabilities and spontaneous oscillations, which may play a role in the mechanisms of heartbeat, flagellar beating, or hearing. In this paper, we study two models of motor assemblies: the rigid two-state model and the classical crossbridge model widely used in muscle physiology.
Both models predict spontaneous oscillations. In the rigid two-state model, they can have a "rectangular" shape or a characteristic "cusp-like" shape that resembles cardiac sarcomere and "stick-slip" oscillations. The oscillations in the vicinity of the Hopf bifurcation threshold can be much faster than the chemical cycle. This property, not found in the crossbridge model where protein friction slows down the motion, could be important for the description of high-frequency oscillations, such as insect wingbeat. Experiments based on the response of a motor assembly to a step displacement are also well described by both theories, which predict non-linear force-displacement relations, delayed rise in tension and "sarcomere give". This suggests that these effects are not directly dependent on molecular details. We also relate the collective properties of the motors to their microscopic properties accessible in single-molecule experiments: we show that a three-state crossbridge model predicts the existence of instabilities even in the case of an apparent load-decelerated detachment rate.
Dynamical behaviour of molecular motor assemblies in the rigid and crossbridge models
T. Guérin, J. Prost and J-F. Joanny, Eur. Phys. J. E, 34, 60 (2011)
Simulating Coherent Control with Spectroscopic Accuracy
Final Report Summary - COCOSPEC (Simulating Coherent Control with Spectroscopic Accuracy)
Molecules are fundamental building blocks of the world around us and consist of atoms bound together by shared electrons. The particles which constitute a molecule, the atomic nuclei and the electrons, are in constant and never-ending motion. This internal dynamics, which can be modeled with the help of quantum mechanics, changes in response to light. But could one exploit this response to control molecules by subjecting them to specific types of light? A new field of research called coherent control is based on this premise. The idea is to use light as a chemical reagent to control, for instance, the outcome of chemical reactions. The models of molecules that my colleagues and I have developed allow us to examine and predict different ways to control molecules. Our calculations yield the molecular dynamics in full and complete detail, showing the intricate flow of energy through the molecule in real time, and reproducing the complicated energy-resolved spectra with high accuracy, thus revealing which types of motion dominate the dynamics. Thanks to the precision of our models, our predictions can be directly compared to experiments, and their usefulness extends to completely different fields of research. For instance, we are able to explain important processes in combustion and plasma chemistry, in atmospheric chemistry (with all its implications for global warming) and in astrophysics, including processes that ultimately lead to the birth of new stars.
One exotic type of molecule for which we have developed new theory was recently discovered in laboratories in the Netherlands, Switzerland and the US. These molecules begin life as, for instance, normal hydrogen molecules (H2), but are pumped with energy from lasers so that they become very large, with the distance between the two (bound) atoms reaching almost macroscopic dimensions. When the two atoms approach each other during the vibrational motion, an electron is squeezed out instead, and orbits the molecule at great distance. A good way to think about it is that the molecule teeters on the brink of dissociation (bond-breaking) and ionization (removal of an electron). There is much we still do not understand about these molecules, but our new theory, developed during the Marie Curie IEF, has already helped explain many of the exotic properties and observations of these molecules.
The dynamics of competing ionization and dissociation in a diatomic molecule embodies many of the key challenges facing molecular spectroscopy, such as strong non-adiabatic couplings between electronic and nuclear motion, energy flow between different degrees of freedom (electronic, vibrational, rotational), delicately balanced interference effects between ionization and dissociation continua, complex (overlapping) resonances and internal time-scales spanning orders of magnitude. We have used recently developed time-dependent Multichannel Quantum Defect Theory (MQDT) to obtain complementary time and frequency domain perspectives on the complex dynamics in H2. MQDT is used to solve the stationary, time-independent Schrödinger equation for the molecular Hamiltonian with all degrees of freedom included, which in turn provides a highly adapted and converged basis for the solution of the time-dependent Schrödinger equation. The calculations yield the molecular dynamics in full detail, providing both a detailed picture of energy flow in real time, and reproducing the complicated energy-resolved spectra with high accuracy. In this context coherent control can be seen as an excellent tool for molecular spectroscopy, providing a creative use of laser pulses and pulse sequences to study molecules, in close analogy to NMR. The results shed light not only on the control mechanisms, but also on the fundamental photodynamics of the ubiquitous H2 molecule.
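The "solve the stationary equation once for a converged basis, then propagate" pattern described above is generic, and a toy sketch can make it concrete. The following Python fragment is our own illustration, not the MQDT code itself: it assumes a small Hermitian model Hamiltonian built with numpy and propagates an initial state exactly by attaching a phase exp(-iE_n t) to each expansion coefficient (ħ = 1).

import numpy as np

# Any Hermitian model Hamiltonian stands in for the molecular one.
rng = np.random.default_rng(1)
H = rng.standard_normal((50, 50))
H = (H + H.T) / 2

# Stationary (time-independent) Schrödinger equation: eigenbasis and energies.
E, V = np.linalg.eigh(H)

# Expand the initial state in the eigenbasis...
psi0 = np.zeros(50)
psi0[0] = 1.0
c = V.conj().T @ psi0

# ...and solve the time-dependent equation by dephasing the coefficients.
def psi(t):
    return V @ (np.exp(-1j * E * t) * c)

print(np.abs(psi(3.0))**2)  # probability distribution at t = 3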
Minding matter
Adam Frank
Bits of stuff called matter. Photo by Peter Marlow/Magnum
For physicists, the ambiguity over matter boils down to what we call the measurement problem, and its relationship to an entity known as the wave function. Back in the good old days of Newtonian physics, the behaviour of particles was determined by a straightforward mathematical law that reads F = ma. You applied a force F to a particle of mass m, and the particle moved with acceleration a. It was easy to picture this in your head. Particle? Check. Force? Check. Acceleration? Yup. Off you go.
The equation F = ma gave you two things that matter most to the Newtonian picture of the world: a particle’s location and its velocity. This is what physicists call a particle’s state. Newton’s laws gave you the particle’s state for any time and to any precision you need. If the state of every particle is described by such a simple equation, and if large systems are just big combinations of particles, then the whole world should behave in a fully predictable way. Many materialists still carry the baggage of that old classical picture. It’s why physics is still widely regarded as the ultimate source of answers to questions about the world, both outside and inside our heads.
In Isaac Newton’s physics, position and velocity were indeed clearly defined and clearly imagined properties of a particle. Measurements of the particle’s state changed nothing in principle. The equation F = ma was true whether you were looking at the particle or not. All of that fell apart as scientists began probing at the scale of atoms early last century. In a burst of creativity, physicists devised a new set of rules known as quantum mechanics. A critical piece of the new physics was embodied in Schrödinger’s equation. Like Newton’s F = ma, the Schrödinger equation represents mathematical machinery for doing physics; it describes how the state of a particle is changing. But to account for all the new phenomena physicists were finding (ones Newton knew nothing about), the Austrian physicist Erwin Schrödinger had to formulate a very different kind of equation.
When calculations are done with the Schrödinger equation, what’s left is not the Newtonian state of exact position and velocity. Instead, you get what is called the wave function (physicists refer to it as psi after the Greek symbol Ψ used to denote it). Unlike the Newtonian state, which can be clearly imagined in a commonsense way, the wave function is an epistemological and ontological mess. The wave function does not give you a specific measurement of location and velocity for a particle; it gives you only probabilities at the root level of reality. Psi appears to tell you that, at any moment, the particle has many positions and many velocities. In effect, the bits of matter from Newtonian physics are smeared out into sets of potentials or possibilities.
How can there be one rule for the objective world before a measurement is made, and another that jumps in after the measurement?
It’s not just position and velocity that get smeared out. The wave function treats all properties of the particle (electric charge, energy, spin, etc) the same way. They all become probabilities holding many possible values at the same time. Taken at face value, it’s as if the particle doesn’t have definite properties at all. This is what the German physicist Werner Heisenberg, one of the founders of quantum mechanics, meant when he advised people not to think of atoms as ‘things’. Even at this basic level, the quantum perspective adds a lot of blur to any materialist convictions of what the world is built from.
Then things get weirder still. According to the standard way of treating the quantum calculus, the act of making a measurement on the particle kills off all pieces of the wave function, except the one your instruments register. The wave function is said to collapse as all the smeared-out, potential positions or velocities vanish in the act of measurement. It’s like the Schrödinger equation, which does such a great job of describing the smeared-out particle before the measurement is made, suddenly gets a pink slip.
You can see how this throws a monkey wrench into a simple, physics-based view of an objective materialist world. How can there be one mathematical rule for the external objective world before a measurement is made, and another that jumps in after the measurement occurs? For a hundred years now, physicists and philosophers have been beating the crap out of each other (and themselves) trying to figure out how to interpret the wave function and its associated measurement problem. What exactly is quantum mechanics telling us about the world? What does the wave function describe? What really happens when a measurement occurs? Above all, what is matter?
There are today no definitive answers to these questions. There is not even a consensus about what the answers should look like. Rather, there are multiple interpretations of quantum theory, each of which corresponds to a very different way of regarding matter and everything made of it – which, of course, means everything. The earliest interpretation to gain force, the Copenhagen interpretation, is associated with Danish physicist Niels Bohr and other founders of quantum theory. In their view, it was meaningless to speak of the properties of atoms in-and-of-themselves. Quantum mechanics was a theory that spoke only to our knowledge of the world. The measurement problem associated with the Schrödinger equation highlighted this barrier between epistemology and ontology by making explicit the role of the observer (that is: us) in gaining knowledge.
Not all researchers were so willing to give up on the ideal of objective access to a perfectly objective world, however. Some pinned their hopes on the discovery of hidden variables – a set of deterministic rules lurking beneath the probabilities of quantum mechanics. Others took a more extreme view. In the many-worlds interpretation espoused by the American physicist Hugh Everett, the authority of the wave function and its governing Schrödinger equation was taken as absolute. Measurements didn’t suspend the equation or collapse the wave function, they merely made the Universe split off into many (perhaps infinite) parallel versions of itself. Thus, for every experimentalist who measures an electron over here, a parallel universe is created in which her parallel copy finds the electron over there. The many-worlds interpretation is one that many materialists favour, but it comes with a steep price.
Here is an even more important point: as yet there is no way to experimentally distinguish between these widely varying interpretations. Which one you choose is mainly a matter of philosophical temperament. As the American theorist Christopher Fuchs puts it, on one side there are the psi-ontologists who want the wave function to describe the objective world ‘out there’. On the other side, there are the psi-epistemologists who see the wave function as a description of our knowledge and its limits. Right now, there is almost no way to settle the dispute scientifically (although a standard form of hidden variables does seem to have been ruled out).
This arbitrariness of deciding which interpretation to hold completely undermines the strict materialist position. The question here is not if some famous materialist’s choice of the many-worlds interpretation is the correct one, any more than whether the silliness of The Tao of Physics and its quantum Buddhism is correct. The real problem is that, in each case, proponents are free to single out one interpretation over others because … well … they like it. Everyone, on all sides, is in the same boat. There can be no appeal to the authority of ‘what quantum mechanics says’, because quantum mechanics doesn’t say much of anything with regard to its own interpretation.
Putting the perceiving subject back into physics seems to undermine the whole materialist perspective
Each interpretation of quantum mechanics has its own philosophical and scientific advantages, but they all come with their own price. One way or another, they force adherents to take a giant step away from the kind of ‘naive realism’, the vision of little bits of deterministic matter, that was possible with the Newtonian world view; switching to a quantum ‘fields’ view doesn’t solve the problem. It was easy to think that the mathematical objects involved with Newtonian mechanics referred to real things out there in some intuitive way. But those subscribing to psi-ontology – sometimes called wave function realism – must now navigate a labyrinth of challenges in holding their views. The Wave Function (2013), edited by the philosophers Alyssa Ney and David Z Albert, describes many of these options, which can get pretty weird. Reading through the dense analyses quickly dispels any hope that materialism offers a simple, concrete reference point for the problem of consciousness.
The attraction of the many-worlds interpretation, for instance, is its ability to keep the reality in the mathematical physics. In this view, yes, the wave function is real and, yes, it describes a world of matter that obeys mathematical rules, whether someone is watching or not. The price you pay for this position is an infinite number of parallel universes that are infinitely splitting off into an infinity of other parallel universes that then split off into … well, you get the picture. There is a big price to pay for the psi-epistemologist positions too. Physics from this perspective is no longer a description of the world in-and-of itself. Instead, it’s a description of the rules for our interaction with the world. As the American theorist Joseph Eberly says: ‘It’s not the electron’s wave function, it’s your wave function.’
A particularly cogent new version of the psi-epistemological position, called Quantum Bayesianism or QBism, raises this perspective to a higher level of specificity by taking the probabilities in quantum mechanics at face value. According to Fuchs, the leading proponent of QBism, the irreducible probabilities in quantum mechanics tell us that it’s really a theory about making bets on the world’s behaviour (via our measurements) and then updating our knowledge after those measurements are done. In this way, QBism points explicitly to our failure to include the observing subject that lies at the root of quantum weirdness. As the American physicist N David Mermin wrote in the journal Nature: ‘QBism attributes the muddle at the foundations of quantum mechanics to our unacknowledged removal of the scientist from the science.’
Putting the perceiving subject back into physics would seem to undermine the whole materialist perspective. A theory of mind that depends on matter that depends on mind could not yield the solid ground so many materialists yearn for.
It is easy to see how we got here. Materialism is an attractive philosophy – at least, it was before quantum mechanics altered our thinking about matter. ‘I refute it thus,’ said the 18th-century writer Samuel Johnson, kicking a large rock as a refutation of the arguments against materialism he’d just endured. Johnson’s stony drop-kick is the essence of a hard-headed (and broken-footed) materialist vision of the world. It provides an account of exactly what the world is made of: bits of stuff called matter. And since matter has properties that are independent and external to anything having to do with us, we can use that stuff to build a fully objective account of a fully objective world. This ball-and-stick vision of reality seems to inspire much of materialism’s public confidence about cracking the mystery of the human mind.
Today, though, it is hard to reconcile that confidence with the multiple interpretations of quantum mechanics. Newtonian mechanics might be fine for explaining the activity of the brain. It can handle things such as blood flow through capillaries and chemical diffusion across synapses, but the ground of materialism becomes far more shaky when we attempt to grapple with the more profound mystery of the mind, meaning the weirdness of being an experiencing subject. In this domain, there is no avoiding the scientific and philosophical complications that come with quantum mechanics.
First, the differences between the psi-ontological and psi-epistemological positions are so fundamental that, without knowing which one is correct, it’s impossible to know what quantum mechanics is intrinsically referring to. Imagine for a moment that something like the QBist interpretation of quantum mechanics were true. If this emphasis on the observing subject were the correct lesson to learn from quantum physics, then the perfect, objective access to the world that lies at the heart of materialism would lose a lot of wind. Put another way: if QBism or other Copenhagen-like views are correct, there could be enormous surprises waiting for us in our exploration of subject and object, and these would have to be included in any account of mind. On the other hand, old-school materialism – being a particular form of psi-ontology – would by necessity be blind to these kinds of additions.
A second and related point is that, in the absence of experimental evidence, we are left with an irreducible democracy of possibilities. At a 2011 quantum theory meeting, three researchers polled the participants, asking: ‘What is your favourite interpretation of quantum mechanics?’ (Six different models got votes, along with some preferences for ‘other’ and ‘no preference’.) As useful as this exercise might be for gauging researchers’ inclinations, holding a referendum for which interpretation should become ‘official’ at the next meeting of the American Physical Society (or the American Philosophical Society) won’t get us any closer to the answers we seek. Nor will stomping our feet, making loud proclamations, or name-dropping our favourite Nobel-prizewinning physicists.
Rather than trying to sweep away the mystery of mind by attributing it to the mechanisms of matter, we must grapple with the intertwined nature of the two
Given these difficulties, one must ask why certain weird alternatives suggested by quantum interpretations are widely preferred over others within the research community. Why does the infinity of parallel universes in the many-worlds interpretation get associated with the sober, hard-nosed position, while including the perceiving subject gets condemned as crossing over to the shores of anti-science at best, or mysticism at worst?
Kick at the rock, Sam Johnson, break your bones:
But cloudy, cloudy is the stuff of stones.
Time Evolution
The Time Evolution node is one of the building blocks for observing the dynamics of a system over time. It is inserted inside a time loop and evolves an initial state by solving the time-dependent Schrödinger equation (TDSE).
The node has the following inputs:
• Initial State ($\psi_{0}$): The required input state that is evolved over time.
• Hamiltonian (H(t)): The Hamiltonian node that governs the evolution; it may itself vary in time.
• Time step (dt): A scalar quantity that sets the size of the time increment used in each step of the evolution.
At each time step, the node numerically solves the TDSE and updates the state to give the time-evolved state.
After the inputs are provided, the node gives the following output:
• Time-evolved state ($\psi_{t}$): The state evolved after each time step.
In the example below, the set-up shows the time evolution of a linearly superposed state in a harmonic oscillator. The Time Evolution node is inserted inside the time loop, which evolves the superposed state.
The dynamics over time can be seen in the plot when the simulation is running.
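For readers who want to see what such a node computes internally, here is a minimal numpy/scipy sketch of the same set-up: a harmonic oscillator with a superposition of its two lowest eigenstates, stepped forward inside a time loop. This is our own illustration of the underlying mathematics, not the tool's actual implementation (units with ħ = m = ω = 1).

import numpy as np
from scipy.linalg import expm

# Discretised position grid
N = 300
x = np.linspace(-10, 10, N)
dx = x[1] - x[0]

# Harmonic-oscillator Hamiltonian H = -0.5 d2/dx2 + 0.5 x^2,
# with the kinetic term as a second-order finite difference
diag = 1.0 / dx**2 + 0.5 * x**2
off = np.full(N - 1, -0.5 / dx**2)
H = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)

# Initial state psi_0: equal superposition of the two lowest eigenstates
E, V = np.linalg.eigh(H)
psi = (V[:, 0] + V[:, 1]) / np.sqrt(2)

# Time loop: each pass applies the short-time propagator exp(-i H dt)
dt = 0.05
U = expm(-1j * H * dt)
for step in range(200):
    psi = U @ psi              # the time-evolved state psi_t
    density = np.abs(psi)**2   # |psi|^2, the quantity one would plot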
Seduced by calculus
The 2010 Fields Medal was won by a French mathematician captivated by the crowning mathematical achievement of the Enlightenment. Alex Bellos explains.
Illustration: Jeffrey Phillips
The French mathematician Cédric Villani is no ordinary-looking university professor. Handsome and slender, with a boyish face and a wavy, neck-length bob, he looks more like a dandy from the Belle Epoque, or a member of an avant-garde student rock band.
He always wears a three-piece suit, starched white collar, lavaliere cravat – the kind folded extravagantly in a giant bow – and a sparkling, tarantula-sized spider brooch. “Somehow I had to do it,” he said of his appearance. “It was instinctive.”
I first met Villani in Hyderabad, India, at the 2010 International Congress of Mathematicians, or ICM, the four-yearly gathering of the tribe. Of the 3,000 delegates, Villani was the focus of most attention, not because he was the most elaborately dressed, but because he received the Fields Medal at the opening gala.
The Fields is the highest honour in maths and is awarded at each ICM to two, three or four mathematicians under the age of 40. The age rule recognises the original motivation behind the prize, which was conceived by the Canadian mathematician J. C. Fields. He wanted not only to recognise work already done, but also to encourage future success. Such is the acclaim afforded by a Fields Medal, however, that since the first two were awarded in 1936, they have helped establish a cult of youth, implying that once you hit 40 you’re past it. This is unfair. Many mathematicians produce their best work after the age of 40, although Fields medallists can struggle to regain focus, since fame brings with it other responsibilities.
Mathematicians gather at the ICM to take stock of their achievements, and the Fields Medal citations provide the clearest snapshot of the most exciting recent work. Unlike the citations for the other three winners in 2010, which were impenetrable to me and even to many of the mathematicians present, Villani’s citation was understandable to the non-specialist. He won “for his proofs of nonlinear Landau damping and convergence to equilibrium for the Boltzmann equation”.
The Boltzmann equation, devised by the Austrian physicist Ludwig Boltzmann in 1872, concerns the behaviour of particles in a gas, and is one of the best known equations in classical physics. Not only is Villani a devotee of the 19th century’s neckwear, he is also a world authority on its applied mathematics.
The Boltzmann equation is what is known as a partial differential equation, or PDE. Written compactly, with the collision term abbreviated as Q(f, f), it looks like this:

∂f/∂t + v·∇x f = Q(f, f)
The equation is written in the vocabulary of calculus. Shortly, I’ll explain the symbols. Calculus was the crowning intellectual achievement of the Enlightenment, and Villani’s Fields Medal demonstrates that it remains a rich area of advanced mathematical study. But before we return to the flamboyantly attired Frenchman, we first need to transport ourselves from southern India in 2010 to Sicily in around the third century BCE.
On the front of the Fields Medal is the bearded portrait of Archimedes, basking in the glow of his reputation as the most illustrious mathematician of antiquity. Archimedes, however, is usually remembered for his contributions to physical science, such as the screw that raises water when turned by hand. Yet Plutarch wrote that geometry was his true love. At bath times “while (his servants) were anointing of him with oils and sweet savours, with his fingers he drew lines upon his naked body, so far was he taken from himself, and brought into ecstasy or trance, with the delight he had in the study of geometry”.
The initial task of geometry was the calculation of area. (According to Herodotus, geometry began as a practice devised by Egyptian tax inspectors to calculate areas of land destroyed by the Nile’s annual floods.) As we all know, the area of a rectangle is the width multiplied by the height, and from this formula we can deduce that the area of a triangle is half the base times the height. The Greeks devised methods to calculate the areas of more complicated shapes. Of these, the most impressive achievement was Archimedes’s “quadrature of the parabola”, by which is meant calculation of the area bounded by a line and a parabola, which is a specific type of U-shaped curve. Archimedes first drew a large triangle inside the parabola, as illustrated below, then on either side of this he drew another triangle. On each of the two sides of these smaller triangles, he drew an even smaller triangle, and so on, such that all three points of each triangle were always on the parabola. The more triangles he drew, the closer and closer their combined area was to the area of the parabolic section. If the process was allowed to carry on forever the infinite number of triangles would perfectly cover the desired area.
The quadrature of the parabola.
Archimedes’ quadrature of the parabola is the most sophisticated example from the classical age of the method of exhaustion, the technique of adding up a sequence of small areas that converge towards a larger one. The proof is considered his finest moment because it represents the first “modern” view of mathematical infinity. Archimedes was the earliest thinker to develop the apparatus of an infinite series with a finite limit. This was important not only for conquering the areas of shapes significantly more exotic than the parabola, but also for starting on the conceptual path towards calculus. Of the giants on whose shoulders Isaac Newton would eventually perch, Archimedes was the first.
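A few lines of Python (our own illustration, not part of Archimedes' argument) make the convergence tangible. Archimedes showed that each new generation of triangles adds exactly a quarter of the area of the previous generation, so with the first triangle as the unit of area the series 1 + 1/4 + 1/16 + … approaches 4/3, his famous result for the parabolic segment.

# Sum Archimedes' series: each stage adds 1/4 of the previous stage's area.
total = 0.0
stage = 1.0   # area of the first inscribed triangle, taken as the unit
for n in range(30):
    total += stage
    stage /= 4.0
print(total)  # ~1.3333...: the segment is 4/3 of the first triangle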
Infinity is a number bigger than any other. It has a twin concept, the infinitesimal, which is a number smaller than any other, yet still larger than zero.
In the 17th century, mathematicians realised how useful the infinitesimal was, even though it was a concept that didn’t make much sense – it was the mathematical equivalent of having your cake and eating it. The infinitesimal was both something and nothing: large enough to be of mathematical use, but small enough to disappear when you needed it to.
Calculating the area of a circle with infinitesimals.
For example, consider the circle illustrated here. Inside is a dodecagon, a 12-sided shape made up of 12 identical triangles sharing a common vertex, or point. The combined area of the triangles is approximately the area of the circle. If I drew a polygon with more sides within the circle, containing more, thinner triangles, their combined area would approximate the circle more closely. And if I kept on increasing the number of sides, in the limit I would have a polygon with an infinite number of sides containing an infinite number of infinitely thin triangles. The area of each triangle is infinitesimal, yet their combined area is the area of the circle, as illustrated below left.
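The same limit can be checked numerically. In this short sketch (our own, for a circle of radius 1), the inscribed n-sided polygon is treated as n identical triangles with apex angle 2π/n, each of area (1/2)·sin(2π/n); as n grows, the polygon's total area approaches π, the area of the circle.

import math

# Area of a regular n-gon inscribed in the unit circle
for n in (12, 100, 10_000, 1_000_000):
    area = n * 0.5 * math.sin(2 * math.pi / n)
    print(n, area)  # approaches pi = 3.14159... as n grows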
The infinitesimal was also useful in determining gradients. For readers who have forgotten what a gradient is, it is the measure of a slope, calculated by dividing the distance moved up by the distance moved along. So, in the illustration below right, the gradient of the road is 1/4 because the distance moved up is 100m and the distance moved along is 400m. Mathematicians, however, wanted to find a method to calculate the gradient of tangents, which are those lines that touch a curve at a single point.
A gradient.
The tangent.
The trick to finding the gradient of a tangent at point P is to make an approximation of the tangent, and then to improve the approximation until it coincides with the desired line. We do this by drawing a line through P that cuts the curve at nearby point Q, and then we bring Q closer and closer to P. When Q hits P, the line is the tangent.
The gradient of the line through P and Q is ∆y/∆x. (The Greek letter delta, ∆, is a mathematical symbol meaning a small increment). As Q closes in on P, the value ∆y/∆x approaches the gradient of the tangent at P. But we have a problem. If we let Q actually reach P, then ∆y = 0 and ∆x = 0, meaning that the gradient of the curve at P is 0/0. Bad maths alert! The rules of arithmetic prohibit division by zero! The solution is to keep Q at an infinitesimal distance from P. If we do, we can say that when Q becomes infinitesimally close to P, the value ∆y/∆x is infinitesimally close to the gradient of the curve at P.
Approximating a tangent.
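That limiting argument is easy to watch in action. In the hedged sketch below (our own example, with the curve y = x² and P = (1, 1)), Q slides towards P and ∆y/∆x settles on 2, the gradient of the tangent.

# Slide Q towards P = (1, 1) on the curve y = x^2 and watch the gradient.
def y(x):
    return x * x

for dx in (0.1, 0.01, 0.001, 1e-6):
    dy = y(1 + dx) - y(1)
    print(dx, dy / dx)  # delta-y over delta-x approaches 2, the tangent's gradient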
In 1665, Isaac Newton, recently graduated from Cambridge, returned to live with his mother in their Lincolnshire farmhouse. The Great Plague was devastating towns across the country. The university had closed down to protect its staff and students. Newton made himself a small study and started to fill a giant jotter he called the Waste Book with mathematical thoughts. Over the next two years the solitary scribbler, undistracted, devised new theorems that became the foundations of the Philosophiæ Naturalis Principia Mathematica, his 1687 treatise that, more than any work before or since, transformed our understanding of the physical universe. The Principia established a system of natural laws that explained why objects, from apples falling off trees to planets orbiting the Sun, move as they do. Yet Newton’s breakthrough in physics required an equally fundamental breakthrough in maths. He formalised the previous half-century’s work on infinity and infinitesimals into a general system with a unified notation. He called it the method of fluxions, but it became better known as the “calculus of infinitesimals”, and now, simply, “calculus”.
A body that moves changes its position, and its speed is the change in position over time. If a body is travelling with a fixed speed, it changes its position by a fixed amount every fixed period. A car with constant speed that covers 60 miles between 4pm and 5pm is travelling at 60 miles per hour. Newton wanted to solve a different problem: how does one calculate the speed of a body that is not travelling at a constant speed? For example, let’s say the car above, rather than travelling consistently at 60mph, is continually slowing down and speeding up because of traffic. One strategy to calculate its speed at, say, 4.30pm, is to consider how far it travels between 4.30pm and 4.31pm, which will give us a distance per minute. (We just need to multiply the distance by 60 to get the value in mph.) But this figure is just the average speed for that minute, not the instantaneous speed at 4.30pm. We could aim for a shorter interval – say, the distance travelled between 4.30pm and 1 second later, which would give us a distance per second. (We’d then multiply by 3,600 to get the value in mph). But again this value is the average for that second. We could aim for smaller and smaller intervals, but we are never going to get the instantaneous speed until the interval is tinier than any other – when it is zero, in other words. But when the interval is zero, the car does not move at all!
This line of reasoning should sound familiar, because I used it two paragraphs ago when explaining how to calculate the gradient of a tangent. To find the gradient we divide an infinitesimally small quantity (length) by another infinitesimally small quantity (another length). To get the instantaneous speed we also divide an infinitesimally small quantity (distance) by another infinitesimally small quantity (time). The problems are mathematically equivalent. Newton’s method of fluxions was a method to calculate gradients, which enabled him to calculate instantaneous speeds.
Calculus allowed Newton to take an equation that determined the position of an object, and from it devise a secondary equation about the object’s instantaneous speed. It also allowed him to take an equation determining the object’s instantaneous speed, and from it devise a secondary equation about position, which, as it turned out, was equivalent to the calculation of areas using infinitesimals! Calculus, therefore, gave him the mathematical tools to develop his laws of motion. In his equations, he called the variables x and y “fluents” and the gradients “fluxions”, written with the “pricked letters” ẋ and ẏ.
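Modern computer algebra makes this two-way traffic between position and speed a few lines of work. The sketch below uses sympy, and the position law 5t² is an arbitrary example of ours: differentiation turns the position equation into the speed equation, and integration turns it back.

import sympy as sp

t = sp.symbols('t')
s = 5 * t**2                   # position of a uniformly accelerating body

v = sp.diff(s, t)              # differentiation: instantaneous speed, 10*t
s_again = sp.integrate(v, t)   # integration recovers the position, 5*t**2

print(v, s_again)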
When Newton returned to Cambridge after two years avoiding the plague in Lincolnshire, he did not tell anyone about the method of fluxions. On the continent, Gottfried Leibniz was developing an equivalent system. Leibniz was German by birth but a man of the world – a lawyer, diplomat, alchemist, engineer and philosopher. Leibniz was also the mathematician most obsessed with notation. The symbols he used for his system of calculus were clearer than Newton’s, and are the ones we use today.
Leibniz introduced the terms dx and dy for the infinitesimal differences in x and y. The gradient, which is one infinitesimal difference divided by the other, he wrote dy/dx. Thanks to his use of the word “difference”, the calculation of gradient became known as “differentiation”. Leibniz also introduced the distinctive stretched “s”, ∫, as the symbol for the calculation of area. It’s an abbreviation of summa, or sum, since the calculation of area is based on infinite sums of infinitesimals. On the suggestion of his friend Johann Bernoulli, Leibniz called his technique calculus integralis, and the calculation of area became known as “integration”. Leibniz’s ∫ is the most majestic symbol in maths, reminiscent of the f-hole of a cello or violin.
Calculus comprises differentiation (computation of gradient) and integration (computation of area). In general terms, gradient is the rate of change of one quantity over another, and area is the measure of how much one quantity accumulates with respect to another. Calculus thus provided scientists with a way to model quantities that varied in relation to each other. It is a formidable instrument to explain the physical world because everything in the universe, from the tiniest atoms to the largest galaxies, is in a state of permanent flux.
When we know the relationship between two varying quantities, we can describe them in an equation using the symbols for differentiation and integration. An equation in x and y that includes the term dy/dx is called a “simple differential equation”. If there are more than two variables, say x, y and t, the rates of change are written ∂y/∂x, or ∂y/∂t, with the rounded ∂. The equation is called a “partial differential equation”, or PDE, because terms like ∂y/∂x tell us how one variable changes with respect to another one, but not to all of them. PDEs dominate applied mathematics. They allow scientists to make predictions. If we know how two quantities vary over time, then we can predict exactly what state they will be in at any time in the future. Maxwell’s equations, which explain the behaviour of magnetic and electric fields, the Schrödinger equation, which underlies quantum mechanics, and Einstein’s field equations, which are the basis of general relativity, are all PDEs.
The first important PDE described the behaviour of a violin string when bowed, a problem that had tormented scientists for decades. It was discovered in 1746 by Jean le Rond d’Alembert, the celebrity mathematician of his day. D’Alembert, the product of a brief liaison between an artillery general and a lapsed nun, was abandoned after he was born and left on the steps of the church Saint Jean Le Rond, next to Notre-Dame Cathedral in Paris, from which he took his name. Brought up by the wife of a glazier, he rose against the odds to become the permanent secretary of the Académie Française. As well as being a serious mathematician, he was also a vociferous apologist for the values of the Enlightenment. He was a public figure, a sought-after guest at aristocratic salons and one of the editors of the landmark Encyclopédie, for which he wrote the preliminary discourse and more than a thousand articles.
D’Alembert was the prototype French scientific intellectual, a role now occupied with gusto by Cédric Villani.
The second time I met Villani was in Paris. Since 2009 he has been director of the Institut Henri Poincaré, France’s elite maths institute, which is situated among the universities of the Latin Quarter. His office is a comfortable clutter of books, paper, coffee mugs, awards, puzzles and geometrical shapes.
Villani’s appearance was unchanged since we met in India at the International Congress of Mathematicians: burgundy cravat, blue three-piece suit, and a metal spider glistening on his lapel. He said his look emerged when he was in his twenties. He wore shirts with large sleeves, then with lace, then a top hat… “It was like a scientific experiment, and gradually it was ‘this is me’.” And the spider? He enjoys its ambiguity. “Some people think the spider is a maternal symbol. Others think that the web is a symbol for the universe, or that the spider is the big architect of the world, like a way to personify God. Spiders don’t leave people indifferent. You immediately have a reaction.” The spider is an archetype rich with interpretations, I thought, just like mathematics is an abstract language with innumerable applications.

Villani’s field is PDEs. Even though PDEs have been around for almost three centuries, he says they are “for a large part still poorly understood. Each PDE seems to have a theory of its own. You have many sub-branches of PDEs with only a small common basis and no general classification. People have tried to classify them, but even the best specialists have failed.”

The PDE that has occupied most of Villani’s time is the Boltzmann equation. It was the subject of his PhD and formed part of the subsequent work that led to his Fields Medal. He now views it with tenderness and devotion. “It’s like the first girl you fall in love with,” he confided. “The first equation you see – you think it is the most beautiful in the world.” Feast your eyes on her again:
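In its standard modern form, with f = f(t, x, v), the equation reads

$$\frac{\partial f}{\partial t} + v\cdot\nabla_x f = \int_{\mathbb{R}^3}\int_{S^2} B(v-v_*,\sigma)\,\bigl(f'f'_*-ff_*\bigr)\,d\sigma\,dv_*$$

where B is the collision kernel and the primes denote velocities after a collision.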
The Boltzmann equation belongs to the field of statistical mechanics: the branch of mathematical physics that investigates how the behaviour of individual molecules in a cloud of gas influences macroscopic properties like temperature and pressure. The equation describes how a gas disseminates by considering the likelihood of any of its molecules being in any particular spot, with a particular speed, at a particular time. [The f is a “probability density function” that gives the probability of particles having a position near x and a speed near v at time t.] The model assumes that particles in a gas bounce around according to Newton’s laws, but in random directions, and describes the effects of their collisions using the maths of probability. Villani pointed at the left side of the equation: “This is just particles going in straight lines.” He pointed to the right side of the equation: “And this is just shock. Tik-ding! Ting-dik!” He bumped his fists together several times. “Often in PDEs, you have tension between various terms. The Boltzmann equation is the perfect case study because the terms represent completely different phenomena and also live in completely different mathematical worlds.”
If you filmed a single gas particle bouncing off another gas particle, and showed it to a friend, there is no way he or she would know whether you were playing the film forwards or backwards, since Newton’s laws are time-reversible. But if you filmed a gas spreading from a beaker to its surroundings, a viewer would instantly be able to tell which way the film was being played, since gases do not suck themselves back into beakers. Boltzmann established a mathematical foundation for the apparent contradiction between micro- and macroscopic behaviour by introducing a new concept, entropy. This is the measure of disorder – in theoretical terms the number of possible positions and speeds of the particles at any time. Boltzmann then showed that entropy always increases. Villani’s breakthrough paper concerned just how fast entropy increases before reaching the totally disordered state.
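Boltzmann’s definition, carved on his tombstone in Vienna, makes this precise:

$$S = k \log W$$

where W is the number of microscopic configurations compatible with the macroscopic state and k is Boltzmann’s constant.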
The Boltzmann equation has straightforward applications, such as in aeronautical engineering, to determine what happens to planes when they fly through gases. Its usefulness is what first appealed to Villani when he embarked on his PhD. But as he became more intimate with the equation, its beauty seduced him. He compares it to a Michelangelo sculpture: “Not pure and ethereal and elegant, but very human, very tortured, with the strength of the energy of the world. In the equation you can hear the roar of the particles, full of fury.” He added that he prefers to spend years studying well-known equations, trying to find new insights into them, rather than inventing new concepts. “It’s what I like, and it’s part of a general attitude that says, ‘Hey, guys! High-energy physics, the Higgs boson, string theory or whatever – it may all be fascinating, but remember we still don’t understand Newtonian mechanics.’ There are still many, many open problems.” He showed me a PDE in a book. “Does this equation have smooth solutions? Nobody in hell knows that!” He shrugged his shoulders, his forehead criss-crossed with lines.
|
c3e329917200801a | Symposium EN03.03 Green Electrochemical Energy Storage Solutions—Materials, Processes and Devices
More than Pretty Pictures - The Importance of Creating Compelling and Honest Visuals
Meet MRS Award Recipients—Lightning Talks and Panel Discussion
The MRS Awards session consisted of 10-15 minute “lightning talks” by each recipient. This was followed by a panel discussion in which the materials scientists responded to audience questions about the inspirations and challenges they encountered on their path to their award-winning research.
David Turnbull Lectureship
Paula T. Hammond, Massachusetts Institute of Technology
Making Sticky Particles for Better Medicine
Paula Hammond’s group creates custom nanoparticles using conformal layer-by-layer growth of polyelectrolytes or other materials around a core that could contain a drug or an imaging agent. The layer properties are chosen to tune the stickiness of the particles in different environments. In one example, a particle containing a chemotherapy drug was protected by an outer layer that was stripped off in the acidic tumor environment. The particles can also be decorated with hyaluronic acid, which binds to a receptor that is overexpressed in tumors. The resulting 40-fold higher accumulation in mouse lung tumors, together with the release of RNA, extended the lives of the mice. Hammond also showed other particles that stay on the cell surface to release their payload to neighboring cells, as well as particles tailored to infiltrate cartilage to treat osteoarthritis.
MRS Medal Award
Catherine J. Murphy, University of Illinois at Urbana-Champaign
A Golden Time for Nanotechnology
Catherine J. Murphy and her group have developed controlled production of gold nanoparticles using seed-mediated growth in aqueous solutions. This technique gives them control of the shape as well as the size of the particles. Both attributes modify the particles’ optical properties, which arise from plasmon resonances of the sub-wavelength particles. Murphy has explored the detailed growth mechanisms, including the roles of surfactants and silver in the final particle shape. She has also investigated the biological effects of nanoparticles from the molecular and cellular level up to the ecosystem level, where they can alter the populations of various microbial species.
MRS Medal Award
Haimei Zheng, Lawrence Berkeley National Laboratory
Real-Time Imaging of Nanoscale Materials Transformations in Liquids
Haimei Zheng and her colleagues use transmission electron microscopy with specialized liquid cells to study how materials transform between states in real time. Her studies provide insight into the electrode-electrolyte interface in batteries, catalytic processes, and other important problems. Zheng often sees individual events that deviate from textbook theory. For example, depending on surfactant concentrations, Pt-Fe particles can grow by attaching to the end of chains, rather than classical nucleation above a critical size where volume free energy overcomes surface free energy. Cobalt oxide nanoparticles tend to form two-dimensional nanosheets, which can be modeled by including an edge free energy.
Materials Theory Award
Lu Sham, University of California, San Diego
Quantum Aspect of the Density Functional Theory
Almost 60 years after Lu Sham and his colleagues developed the density functional theory (DFT), it remains a core technique for calculating electronic structure. Pierre Hohenberg and Walter Kohn had shown that the ground state energy of a complex atomic arrangement could be described in terms of the spatially dependent electron density. Kohn and Sham reformulated these results as a single-electron Schrödinger equation with an intuitive and useful effective potential. “Lots of people didn’t believe it at the beginning,” Sham said, but it has been adapted for a wide variety of calculations, including for quantum phase transitions.
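For reference, the Kohn–Sham equation has the form

$$\Bigl(-\frac{\hbar^2}{2m}\nabla^2 + v_{\mathrm{eff}}(\mathbf{r})\Bigr)\varphi_i(\mathbf{r}) = \varepsilon_i\,\varphi_i(\mathbf{r})$$

where the effective potential $v_{\mathrm{eff}}$ combines the external, Hartree, and exchange–correlation contributions.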
The Kavli Foundation Early Career Lectureship in Materials Science
Silvia Vignolini, University of Cambridge
Color Engineering—From Nature to Applications
Silvia Vignolini’s group seeks to understand how nature creates color using nanoscale structural arrangements of materials that are renewable and biodegradable. In particular, they have exploited helicoidal structures featuring periodically varying orientations of fibers. The fibers are composed of polysaccharides such as cellulose, which is safe and available in large quantities. Vignolini showed a video of color rapidly popping up in an evaporating liquid when cellulose nanocrystals self-assembled. The materials can be created over large areas, printed in patterns, and even made into holograms. They can also be eaten, although Vignolini said they taste like paper.
MRS acknowledges the following individuals for generous contributions to support these awards: MRS Medal and Materials Theory Award, endowed by Toh-Ming Lu and Gwo-Ching Wang.
|
9ec5a95cc11bbe6f |
In mathematical derivations of physical identities, it is often more or less implicitly assumed that functions are well behaved. One example is the Maxwell identities in thermodynamics, which assume that the order of partial derivatives of the thermodynamic potentials is irrelevant, so one can write $$\frac{\partial^2 F}{\partial x\,\partial y} = \frac{\partial^2 F}{\partial y\,\partial x}.$$
Also, it is often assumed that all interesting functions can be expanded in a Taylor series, which is important when one wants to define the function of an operator, for example $$e^{\hat A} = \sum_{n=0}^\infty \frac{(\hat A)^n}{n!}.$$
Are there some prominent examples where such assumptions of mathematically good behavior lead to wrong and surprising results? Such as... an operator $f(\hat A)$ where $f$ cannot be expanded in a power series?
I'm wikifying this since it's a list question without a single correct answer. – David Z Aug 6 '11 at 16:08
5 Answers
Accepted answer (8 votes):
I think the most transparent example is a phase transition: by definition, it is when some thermodynamic quantity does not behave well.
AFAIK, when Fourier showed that a discontinuous function may be presented as an infinite sum of continuous ones, he had a hard time convincing those around him that he was not crazy. That story might partially answer your question: as long as any not-so-well-behaved function may be presented as a sum of smooth ones, there is not much difference, as long as the well-formulated laws are linear. Functions which are really badly behaved usually do not appear in real problems. If they do, there is some significant physics behind it (as with phase transitions, shock waves, etc.) and one cannot miss it.
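A standard example of what Fourier demonstrated: the discontinuous square wave is an infinite sum of smooth sines,

$$\operatorname{sign}(\sin x) = \frac{4}{\pi}\sum_{k=0}^{\infty}\frac{\sin\bigl((2k+1)x\bigr)}{2k+1}.$$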
For an operator it is better (for a physicist) to think of a function of an operator as a function acting on its eigenvalues (if the operator is not diagonalizable, that counts as bad behaviour in physics). This is equivalent to the power-series definition where the series converges, but works for any function.
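A minimal sketch of this point in Python/NumPy (the helper name and the example matrix are mine, purely for illustration): for a diagonalizable matrix, f(A) can be computed from the eigenvalues for any f, with no power series required.

```python
import numpy as np

def func_of_operator(f, A):
    """Apply f to a diagonalizable matrix A via its eigendecomposition:
    f(A) = V diag(f(w)) V^{-1}, where A = V diag(w) V^{-1}."""
    w, V = np.linalg.eig(A)
    return V @ np.diag(f(w)) @ np.linalg.inv(V)

A = np.array([[0.0, 1.0], [1.0, 0.0]])
# Agrees with the power-series definition wherever the series converges:
print(func_of_operator(np.exp, A))  # approx [[cosh 1, sinh 1], [sinh 1, cosh 1]]
```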
I have had a surprising result due to the wave function having different left and right derivatives at a point (see Chapter 2.1 and Appendix 3). Generally, this article contains more surprising results due simply to implicit assumptions being wrong.
Well, I know that when one solves the 1D Schrödinger equation for a potential $-\gamma \delta(r-a)$, the left- and right-derivatives of the wavefunction $\Psi(r)$ differ by $\gamma \Psi(a)$ at that point. Is that what you're referring to? – Lagerbaer Aug 7 '11 at 15:10
@Lagerbaer Yes, to some extent. My perturbation is like $\delta(z-z_1) \frac{d}{dz}$. – Vladimir Kalitvianski Aug 8 '11 at 13:11
Well, I don't know if you want to count that, but QFT is full of functions that have poles, which I'd call not well behaved, and they do have lots of physical effects. If you're talking about observables only, you can approximate any discontinuous function to arbitrary precision with a continuous function, and you can push the difference below measurement precision. The reason one sometimes uses 'ill-behaved' functions (delta, Heaviside, etc.) is that they're easier to deal with.
But the poles in QFT are probably something people are fully aware of? I am thinking more about identities where, in the proof, an assumption about well-behaved functions is made that can then get overlooked when one just plugs some function into it – Lagerbaer Aug 8 '11 at 15:34
This principle fails in the most startling way in second order phase transitions. This is a particularly clean example, because Landau predicted the critical exponents of second-order phase transitions using only the principle that the thermodynamic functions are analytic.
His argument is as follows: given a magnet going through the Curie point, where it loses its magnetization smoothly, the equilibrium magnetization should be the solution of some thermodynamic equation, in which the derivative of some thermodynamic potential is set to zero.
At temperatures lower than $T_c$, the magnetization is nonzero, and at temperatures higher than $T_c$, the magnetization is 0, and it goes to 0 in a continuous way. How does it go to zero?
Note that the magnetization m and -m are related by rotational symmetry. Shifting T_c to 0 by translating $f(t,m)= F(T_c - t,m)$, you get a new thermodynamic function, which has the property that f has only the trivial solution m=0 for negative t, and has two small nontrivial solutions in m for positive t.
Because m=0 is a solution at t=0, the function $f$ has no constant term in a Taylor expansion. By the symmetry of $m\rightarrow -m$, only even powers of m contribute to its Taylor series.
$f(t,m) = At + Bm^2 + Ct^2 + Dt^3 + E t m^2 ...$
Assuming that $f(t,m)$ is generic, A and B are not exactly zero. So for small enough t, for temperatures close enough to the critical point, you get that
$m \propto \sqrt{t}$
Further, this scaling only fails if one of the coefficients is zero. If A=0,
$m \propto |t|$
But m is then nonzero on both sides of the transition. If B=0, you get
$m \propto t^{1\over 4}$
and m is zero on one side; or A, B, and C are all zero, in which case you get
$m \propto |t|^{3/4}$
And each of these cases requires fine tuning of parameters. So Landau predicted that the critical behavior of the magnetization will be as the square root of the temperature at the critical point, and that this behavior will be universal, it won't depend on the system, just on the existence of the phase transition. The Ising model should have the same critical exponent as the physical magnet, a square root dependence of the magnetization on the temperature, and the liquid gas transition will also have a bend in the curve of the density vs. temperature at the critical pressure which goes as the square root.
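To make the generic case concrete, here is a minimal SymPy check (the signs are chosen so that a real root exists for t > 0; the coefficient names follow the expansion above):

```python
import sympy as sp

t, m, A, B = sp.symbols('t m A B', positive=True)

# Generic case: near the transition keep only the leading terms of
# f(t, m) = 0, here -A*t + B*m**2 = 0, and solve for the magnetization m.
sols = sp.solve(sp.Eq(-A*t + B*m**2, 0), m)
print(sols)  # [sqrt(A)*sqrt(t)/sqrt(B)]  ->  m ~ sqrt(t), Landau's exponent 1/2
```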
The exponent turned out to be universal: it was equal for the gas and liquid, and for the Ising model. But it wasn't 1/2; it was more like .308 in three dimensions, and .125 in two dimensions. It only turned into Landau's 1/2 in 4 dimensions or higher. This means that Landau's argument fails, and that the thermodynamic function conspires to be non-analytic at exactly the place where Landau was expanding. Understanding why it is non-analytic exactly at the phase transition led to the modern renormalization theory.
In mathematics, René Thom proposed that a version of Landau's argument is a complete theory of the types of allowed phase transitions in nature. He called the phase transitions "catastrophes", because they showed a sudden change in behavior, and he predicted, based on catastrophe theory, all sorts of scaling laws for natural transitions. This was the most ambitious attempt to exploit the observation that naturally occurring functions are nice. It fails for the same reason as Landau's argument: functions describing the critical behavior of interesting systems at a transition point are rarely analytic at this point.
A nice example arises for the "rigorous coupled wave analysis" (RCWA) method (also called the Fourier Modal Method), which is used as a Maxwell solver for diffraction gratings. The normal component of the electric field is discontinuous across a material interface. This leads to convergence problems of the RCWA method for TM polarization, because the discontinuous electric field component is expanded into a Fourier series and multiplied by another discontinuous function representing the grating geometry. Many modifications of the RCWA method to overcome this convergence problem were proposed, but the "correct" modification was only discovered in 1996 (by P. Lalanne and M. Morris?). Even though Lifeng Li didn't discover that "correct" modification himself, he wrote the famous paper "Use of Fourier series in the analysis of discontinuous periodic structures" (also in 1996), which analyzed mathematically what goes wrong (multiplication of "approximations of" discontinuous functions is dangerous) and why the latest proposed modification to the RCWA method finally solved the convergence problem.
Today, the Fourier Modal Methods are the most efficient and accurate for many types of grating problems.
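If I recall correctly (stated here only as background, not as part of the answer above), Li's rule for TM polarization can be summarized as follows: where the product of two discontinuous factors is continuous, as for the normal components in $D = \varepsilon E$, one must use the inverse of the Toeplitz matrix built from $1/\varepsilon$ rather than the Toeplitz matrix of $\varepsilon$ itself, i.e. $D \approx [\![1/\varepsilon]\!]^{-1} E$ in truncated Fourier space. The details are in Li's 1996 paper.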
|
ee6db1525fc504dc |
Journal of Function Spaces and Applications
Volume 2013 (2013), Article ID 968603, 13 pages
Research Article
Energy Scattering for Schrödinger Equation with Exponential Nonlinearity in Two Dimensions
School of Mathematical Sciences, Peking University, Beijing 100871, China
Received 9 January 2013; Accepted 24 February 2013
Academic Editor: Baoxiang Wang
When the spatial dimensions , the initial data , and the Hamiltonian , we prove that the scattering operator is well defined in the whole energy space for the nonlinear Schrödinger equation with exponential nonlinearity , where .
1. Introduction
We consider the Cauchy problem for the following nonlinear Schrödinger equation: in two spatial dimensions with initial data and . Solutions of the above problem satisfy the conservation of mass and Hamiltonian: where
Nakamura and Ozawa [1] showed the existence and uniqueness of the scattering operator of (1) with (2). Then, Wang [2] proved the smoothness of this scattering operator. However, both of these results are based on the assumption of small initial data . In this paper, we remove this assumption and show that for arbitrary initial data and , the scattering operator is always well defined.
Wang et al. [3] proved the energy scattering theory of (1) with , where and the spatial dimension . Ibrahim et al. [4] showed the existence and asymptotic completeness of the wave operators for (1) with when the spatial dimensions , , and . Under the same assumptions as [4], Colliander et al. [5] proved the global well-posedness of (1) with (2).
Theorem 1. Assume that , , and . Then problem (1) with (2) has a unique global solution in the class .
Remark 2. In fact, by the proof in [5], the global well-posedness of (1) with (2) is also true for .
In this paper, we further study the scattering of this problem. Note that . Nakanishi [6] proved the existence of the scattering operators in the whole energy space for (1) with when . Then, Killip et al. [7] and Dodson [8] proved the existence of the scattering operators in for (1) with . Inspired by these two works, we use the concentration compactness method, which was introduced by Kenig and Merle in [9], to prove the existence of the scattering operators for (1) with (2).
For convenience, we write (1) and (2) together; that is, where and . Our main result is as follows.
Theorem 3. Assume that the initial data , , and . Let be a global solution of (5). Then
In Section 2, Lemma 9 will show us that Theorem 3 implies the following scattering result.
Theorem 4. Assume that the initial data , , and . Then the solution of (5) is scattering in the energy space .
We will prove Theorem 3 by contradiction in Section 5. In Section 2, we give some nonlinear estimates. In Section 3, we prove the stability of solutions. In Section 4, we give a new profile decomposition for sequence which will be used to prove concentration compactness.
Now, we introduce some notations:
We define
For Banach space , , or , we denote
When , is abbreviated to . When or is infinity or when the domain is replaced by , we make the usual modifications. In particular, we denote
For , we split , where
For any two Banach spaces and , . denotes positive constant. If depends upon some parameters, such as , we will indicate this with .
Remark 5. Note that in Theorem 3, we only need to prove the result for , . Hence, we always suppose that in what follows.
Moreover, we always suppose that the initial data of (5) satisfies and .
2. Nonlinear Estimates
In order to estimate (2), we need the following Trudinger-type inequality.
Lemma 6 (see [10]). Let . Then for all satisfying , one has
Note that for all ,
By Lemma 6 and Hölder inequality, for and for all , we have and thus
Lemma 7 (Strichartz estimates). For or (such pairs are called admissible), we have
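For orientation, a standard form of the homogeneous Strichartz estimate in two space dimensions (stated here from the general literature, not copied from this paper's notation) is

$$\|e^{it\Delta}\varphi\|_{L^q_t L^r_x(\mathbb{R}\times\mathbb{R}^2)} \lesssim \|\varphi\|_{L^2(\mathbb{R}^2)}, \qquad \frac{1}{q}+\frac{1}{r}=\frac{1}{2},\quad q\ge 2,\quad (q,r)\neq(2,\infty).$$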
Lemma 8 (see [3, Proposition 2.3]). Let be fixed indices. Then for any ,
As shown in [6, 11], to obtain the scattering result, it suffices to show that any finite energy solution has a finite global space-time norm. In fact, if Theorem 3 is true, we have the following theorem.
Lemma 9 (Theorem 3 implies Theorem 4). Let be a global solution of (5), , and . Then, for all admissible pairs, we have
Moreover, there exist such that
Proof. Defining , , by Strichartz estimates, (14) and (15),
Arguing as in Bourgain [12], one can split into finitely many pairwise disjoint intervals:
By (21),
Since and can be chosen small arbitrarily, by interpolation, for all admissible pairs and . The desired result (19) follows.
By (19) and (21),
Thus, are well defined and belong to . Since , we must have , and (20) is proved.
3. Stability
Lemma 10 (stability). For any and , there exists with the following property: suppose that satisfies for all , and approximately solves (5) in the sense that
Then for any initial data satisfying and , there is a unique global solution to (5) satisfying .
Proof. Denote ; then and . Let . By estimates similar to (21), we have
Then we subdivide the time interval into finite subintervals , , such that for each . Let be small such that
Then by (31) on , we have and
Using the same analysis as above, we can get . Iterating this for , we obtain ; the desired result follows.
4. Linear Profile Decomposition
In this section, we will give the linear profile decomposition for Schrödinger equation in . First, we give some definitions and lemmas.
Definition 11 (symmetry group, [13]). For any phase , position , frequency , and scaling parameter , we define the unitary transformation by the formula
We let be the collection of such transformations; this is a group with identity , inverse , and group law
If is a function, we define , where by the formula or equivalently
If , we can easily prove that and .
Definition 12 (enlarged group, [13]). For any phase , position , frequency , scaling parameter , and time , we define the unitary transformation by the formula or in other words
Let be the collection of such transformations. We also let act on global space-time function by defining or equivalently
Lemma 13 (linear profiles for sequence, [14]). Let be a bounded sequence in . Then (after passing to a subsequence if necessary) there exists a family , of functions in and group elements for such that one has the decomposition for all ; here, is such that its linear evolution has asymptotically vanishing scattering size:
Moreover, for any ,
Furthermore, for any , one has the mass decoupling property For any , we have
Remark 14. If the orthogonal condition (45) holds, then (see [14])
Moreover, if , then (see [14, 15]), for any , If , then (see [16, Lemma 5.5])
Remark 15. As each linear profile in Lemma 13 is constructed in the sense that weakly in (see [14]), after passing to a subsequence in , rearrangement, translation, and refining accordingly, we may assume that the parameters satisfy the following properties:(i) as , or for all ;(ii) or as , or for all ;(iii) as , or with ;(iv)when , and , we can let .
Our main result in this section is the following lemma.
Lemma 16 (linear profiles for sequence). Let be a bounded sequence in . Then up to a subsequence, for any , there exists a sequence in and a sequence of group elements such that
Here, for each , and must satisfy is such that
Moreover, for any , one has the same orthogonal conditions as (45). For any , one has the following decoupling properties:
Proof. Let
Then, we have
By Lemma 13, after passing to a subsequence if necessary, we can obtain with the stated properties (i)–(iv) in Remark 15 and (43)–(47). Denote
Step 1. We prove that with and for each fixed , where
By (44) and , (64) holds obviously. For (62), we prove it by induction. For every , suppose that
Case 1. If , we have .
In fact, by (66),
Using (47),
By direct calculation,
Let . When , When , When , When ,
By (68)–(74), and thus .
Case 2. If , we can prove
By absorbing the error into , we can suppose . Since for each fixed , we must have .
Now, we begin to prove (75). Let be the characteristic function of the set and , and then where
Note that We have
When , we have . Choosing , then by (79), , the desired result follows.
When and , we have
When and , we denote and . The line (when , we use the line instead) separates the frequency space into two half-planes. We let be the half-plane which contains the point , and then
By (79), we have . Note that (75) holds.
When and , let be the half-plane which does NOT contain the point ; we can prove (75) similarly as above.
By the proof above, we get and . Denote and suppose
Repeating the proof above, we can get , , and ; by induction, we obtain (62).
By the orthogonal condition (45), following the proof in [14], we can obtain that for fixed and for all ; (63) is proved.
Step 2. For arbitrary , we define if the orthogonal condition (45) is NOT true for any subsequence; that is,
By the definition above, if , we have
Note that . By Remark 15, we can put these two profiles together as one profile. Then, by denoting / as , we can obtain the sequence , ; and (52)–(56) are proved.
Specially, since for each , we have for fixed and , and hence for any fixed and .
Step 3. We prove (57) now. By (56), we only need to prove that for all , ,
As and for , By (54), we have
We separate the set into two subsets:
When ,
Hence, in order to prove (90), one only needs to prove
If and , for a function , we have
By approximating by in and sending , we have . Note that ; we obtain for all .
If and , we have orthogonal condition for any . Thus, |
ef5457d297269b20 |
My question is in the title. I do not really understand why water is not a superfluid. Maybe I am making a mistake, but the fact that water is not superfluid seems to come from the fact that the elementary excitations have a parabolic dispersion curve; still, for me the question remains. An equivalent way to ask it is: why is superfluid helium described by the Gross-Pitaevskii equation, while this is not the case for water?
Recent work actually suggests that water may have a superfluid liquid phase – user20145 Jan 23 '13 at 15:24
@x you have to substantiate this claim by a reference or link and a quote, at least from the abstract. – anna v Jan 23 '13 at 15:31
2 Answers
Because water is liquid at much too high a temperature. Helium is only superfluid near absolute zero. To have a superfluid, you need the quantum wavelength of the atoms given the environmental decoherence to be longer than the separation between the atoms, so they can coherently come together.
share|improve this answer
You refer to the Landau criterion for superfluidity (there is a separate question whether this is really the best way to think about superfluids, and whether the Landau criterion is necessary and/or sufficient). In a superfluid the low energy excitations are phonons, the dispersion relation is linear $E_p\sim c p$, and the critical velocity is non-zero. In water the degrees of freedom are water molecules, the dispersion relation is quadratic, $E_p\sim p^2/(2m)$, and the critical velocity is zero.
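For reference, the Landau criterion gives the critical velocity as

$$v_c = \min_p \frac{E_p}{p}$$

which equals the sound speed for the phonon spectrum $E_p = cp$, and vanishes for the free-particle spectrum $E_p = p^2/(2m)$.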
The Gross-Pitaevskii equation applies (approximately) to Helium, because in the superfluid phase there is a single particle state which is macroscopically occupied. The GP equation describes the time evolution of the corresponding wave function. In water there are no macroscopically occupied states. You can try to solve the full many-body Schroedinger equation, but at least approximately this problem reduces to classical kinetic theory.
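In standard form, the GP equation for that macroscopically occupied wave function $\psi$ reads

$$i\hbar\,\partial_t\psi = \Bigl(-\frac{\hbar^2}{2m}\nabla^2 + V(\mathbf{r}) + g\,|\psi|^2\Bigr)\psi$$

a nonlinear Schrödinger equation, with the interactions entering through the coupling constant g.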
I think the best criterion for superfluidity is irrotational flow: The non-classical moment of inertia, quantization of circulation, and persistent flow in a ring. Again, these don't appear in water because there is no spontaneous symmetry breaking, and no macroscopically occupied state.
So now my question is: why is there no macroscopically occupied state for water while there is one for helium? In general we don't try to solve the Schrödinger equation for helium in order to obtain the GP equation, do we? And how can I obtain a classical kinetic equation for water starting from the Schrödinger equation? – PanAkry Sep 24 '12 at 6:56
A rough criterion is the condition for Bose condensation in an ideal gas, $n\lambda^3\sim 1$, where $n$ is the density and $\lambda$ is the thermal wave length. Note that your question is in some sense backwards: Helium is the exception, water is the rule. Most ordinary fluids solidify instead of becoming superfluid at low $T$. – Thomas Sep 24 '12 at 12:38
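A back-of-the-envelope check of this criterion (a sketch using the ideal-gas formula for the thermal de Broglie wavelength; the densities are rough textbook values):

```python
import math

h, kB, amu = 6.626e-34, 1.381e-23, 1.661e-27  # SI units

def n_lambda3(mass_kg, T_kelvin, density_kg_m3):
    """Degeneracy parameter n*lambda^3 for particles of the given mass:
    lambda is the thermal de Broglie wavelength h/sqrt(2*pi*m*k_B*T)."""
    lam = h / math.sqrt(2 * math.pi * mass_kg * kB * T_kelvin)
    n = density_kg_m3 / mass_kg
    return n * lam**3

print(n_lambda3(4 * amu, 2.17, 145))    # liquid He-4 at the lambda point: ~4
print(n_lambda3(18 * amu, 300.0, 1000)) # water at room temperature: ~5e-4
```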
|
fb638f2583d02c8a |
Ehud Barak
Interview: Ehud Barak
Former Prime Minister of Israel
May 4, 2001
San Antonio, Texas
Back to Ehud Barak Interview
You were born on a kibbutz. Tell us something about life growing up on a kibbutz in Israel.
Ehud Barak: It was kind of a communal farm, where some 60 families were living together, supposedly according to a principle that everyone should give according to his skills and get according to his needs, but with a very modest interpretation of human needs.
In present terms it would be called poverty but we never felt this way. You know, my parents had a residence which was a room of 12 by nine foot, no running water, no toilets. The whole commune was dining collectively in one big room that was called the dining room. And even the bath where you could take a shower was some collective installation. Two of them, of course, for males and females, but that was the only kind of differentiation. And the -- you might say long hard working week, from early dawn to sunset. We, the kids, were raised from age of zero in kind of collective dormitories apart from our parents, but still I recall it as a kind of -- I must say happy--kind of happy warm childhood. We felt that -- I at least, was kind of lucky to get a lot of warm care, kind of --not just nurturing but in a way coaching by my parents in the four or five hours a day, high quality time, the five or -- four or five hours that we spent together every day. And, you know, I was there for 18 years and a few years into my adult life but I still remember it very warm. It was remote, far isolated, small, tough conditions, but somehow we felt a part of the emerging nation of Israel, part of the Jewish world, part of the world as a whole. We had a small radio. We listened to everything that happened around the world as kids, as young kids, and somehow our parents gave us the feeling of being both well taken care of young individuals and part of something wider than we as individuals are.
Did you also live with a feeling of danger?
Ehud Barak: Not at the time of my childhood.
I was born in the middle of World War II. Rommel divisions were at the gate of Egypt and there was for some times a real feel that he will take over the Middle East from the British. This was about the time that the parents of my mother were taken to Treblinka, not that we knew it at the time. The parents of my father were murdered in a pogrom before the first World War when he was two-and-a-half years old so I never knew my grandparents.
In retrospect I should have grown up in an unconfident kind of environment, but this was not the case.
I was eight years-and-a-half when the State of Israel was established and I still remember the evening, the counting of votes at Lake Success, and the eruption of kind of emotions immediately afterwards from all around the kibbutz. All the kibbutz practically went around a campfire and danced to the morning, and by the morning we were at war. And at a certain point we could hear the motors of the Iraqi unit that came and almost cut Israel into two. We were three or four miles from the seashore at the very narrowest part of Israel. And we at one morning could hear the mortars but I didn't recall a sense of fear all along these years. We became aware of the price where one member of the kibbutz was fallen during the war and later on when I was a youngster and elderly youngsters in the kibbutz joined the army and one of them was killed, but basically I don't remember fear. Maybe the close climate around us kind of isolated us and maybe my parents or our parents deliberately isolated us from the fears of life at this early stage.
I remember it as something very warm, kind of supportive and encouraging. I remember my father taught me everything, how to play chess, how to climb a tree. Always, whenever I looked, I knew that his hand somehow was behind me, to make sure that I would not fall, or at least not fall from too high a part of the ladder.
What kind of student were you as a young man?
Ehud Barak: I was shy, small sized, almost tiny, always behind the wave of coming to maturity. I joined the school at the age of five plus a few months. I was a shy, introverted boy, totally not in tune with the rest of the group. We were a very small group as a result of the size of the commune, but out of 13 or 14 boys and girls in the class, I was a little bit strange when I look at it in hindsight.
I never played basketball. In fact, until the age of 12, I couldn't throw the ball so it will reach the basket. The girls in my class, most of them, were running the 60-yard track faster than me. I tried once to play soccer but ended up in the defense kicking the knees of the other side rather than the ball, not deliberately but -- So I concentrated on reading. I read a lot. I played the piano.
I found my own way. I picked locks. I was highly interested in mechanics, in fine mechanics, understanding how things work. I was very clumsy in the big motoric movements and very accurate in the delicate ones. Nothing to predict a future decorated soldier or general or leader.
Were there any books that were important to you when you were young?
Ehud Barak: At the beginning, I read what my father gave me. He was focused on science, and opening the world of science, culture and music to me. I read a lot. I read a lot of adventure books. I still remember Jack London, the experience of the crew of a vessel in a storm kind of haunted me. I was reading a lot of Karl May, a German writer who described a Wild West that he had never visited. And Jules Verne of course, I read a lot.
When I was a little bit older, of course, I read Tolstoy's War and Peace. To this day I believe that I disappointed my father by being unable to complete Jean Christophe. About some books he said, "You are not a real human being before you read them." I believe I still didn't read some of them fully.
But they encouraged me in more than one way to be curious, to learn. I was maybe 14 years old when he brought me a book by Gamow about the birth and the death of our sun, which was a kind of popularized version of how the sun is burning out, and about the origins of the universe. I believe he was interested in pushing me gradually toward science, or influencing me indirectly to become a scientist.
I just heard Steve Rosenberg talking about the Nobel Prize address of Isadore Rabi, and it put shivers in my spine.
I remember my own father, which is now 91 years old, repeating to me once and again this point from Isadore Rabi's story about how he became a scientist. He said the most influential moment was that his mother repeatedly when he used to come back from school at a very early age of eight or nine asking him, "Isadore, have you asked kind of a good question today?" Not "What you have learned?" not "What you have observed?" but "Have you raised a good question today?"
It seems that in spite of differences -- Steve Rosenberg is a leading scientist and surgeon -- we are about the same age, and in different corners of the world our parents used to tell us similar stories. I disappointed my father, since in the end I was partially a dropout from high school. I was totally undisciplined.
I was highly interested in mathematics. It seemed to me to be a form of art, something very beautiful, geometry, mathematics, and the systematic way how it's built and so on. But I was somewhat bored by most of other issues that were taught at school and I became at the age -- from 13 maybe to 17, I was totally undisciplined and could not take any kind of discipline. So gradually I became a burden of the school. They asked me to go do something more productive maybe. I was -- I don't know, not hyperactive, I was a very shy introvert -- but to do something useful to work in the field, rather than spend my time in interrupting others that want to study. So I was expelled from high school in the last year. I was allowed to come to listen to the math hours and I spent the rest of the day working until I joined the army at a very early age.
How unusual was that on a kibbutz, to be a problem like that and get kicked out of high school?
Ehud Barak: It was a great disappointment to my father. He was one of a few adults in the kibbutz that had a university background. He highly appreciated the value of good education and he tried to convince me to take this course. I kept telling him, "Dad, when I'm right for it, I will do it. I cannot do something that I don't really feel, or identify with, and I cannot lie to myself." And he told me, "You are fleeing from yourself. There is no way that you will not learn ultimately, so why waste these precious years?" I told him, "I'm not there yet. I don't know how to explain it." It was a major disappointment that he carried with him for a very long time. It was not very usual, but not very unusual at the same time.
You know, we didn't have a school system at the time that would prepare the students for college. No matriculation. No formal systematic coverage of a certain syllabus or curriculum that will enable you to enter. It was kind of a rural, remote school system, very caring, very open, very encouraging kind of "do it your way," which is very modern today, but without kind of sets of standards that should be achieved and practically began to learn systematically only when I was adult, about 23 or 24 when I made my matriculation when I was already an operational officer in the armed forces.
So you went from the kibbutz to the army?
Ehud Barak: I joined the army at the age of 17 and a half, which was quite unusual. My mother finally agreed that I go before I was of age. I still looked like a child, like a youngster of 15 years old. At boot camp I could not even jump over the wall in the obstacles. Decades later I saw a movie with Goldie Hawn -- Private Benjamin -- how she jumped, and it reminded me of myself. I couldn't jump over the wall. I was physically kind of immature. You're carrying all the ammunition, all the equipment, it created a very tough, physically demanding experience for me. But then, during these six months of boot camp came a defining moment for me.
It was early in 1960, some night, Israeli intelligence got a hint from the CIA that two Egyptian divisions are already deployed deep into the Sinai desert very close to the Israeli border. No one knew about it, of course, and it created an immediate emergency for the whole army. Israel had a very small regular standing army and it had to deploy immediately along the border to avoid a surprise attack, which is a kind of trauma that accompanies Israel defense kind of thinking all along.
We were youngsters in the boot camp and...
There was in the unit an emergency need to spread convoys of ammunition to maybe a dozen different points along the border, some of them 50 miles away from the boot camp. And as a result of the need to prepare at the same time all the units, there was a shortage of officers or NCOs that could lead an ammunition convoy to some desert place. The boot camp trainees were asked whether someone of us know how to read a map and can lead convoy a dark night to a certain position 50 miles from here. No one responded, and it seemed to be a kind of real emergency and I thought I can. So I raised my hand and I said simply, "I can do it." I had some experience in reading a map from summer camps and summer treks where I made the point of always knowing exactly where I was. So I get acquainted to looking at the map and it seemed to me that I understand it. I can read it. I still to this day remember the eyes of the battalion commander when he released me into the darkness kind of contemplating what will happen. If I cross the border with the convoy or something else, who will be responsible? But in a way he didn't have an alternative at the moment and he sent me.
I had my own moments of doubts of course, but finally I did it. We reached the point. I learned my own lesson from it, but this experience led me to the leading commando unit of the Israeli Defense Forces, kind of equivalent of the Delta Forces here, long before the Delta Forces were established, or the British Special Air Service, after whom we adopted their slogan, "Who Dares, Wins."
Not in terms of your military career, but as an individual, how important was that moment in your life?
Ehud Barak: I believe that I already came from my childhood with the kind of feeling somehow that the fact that I'm slightly different doesn't mean that I'm worse. Or somehow -- it doesn't create -- should not kind of deter me from trying to do things. It's just a matter of fact. I cannot throw the ball through the basket so I cannot become a basketball player. But it somehow did not deter me. Somehow I came out of childhood with kind of a self-confident -- or not self-confidence in things that I cannot do, but kind of calibrated assessment of what I can do, and with a basic sense of direction of what I can do, a sense of judgment.
Maybe I felt supported by the warmth that I absorbed. When I think about it in retrospect, it was the most imprinting kind of warmth, the kind of adult care -- but not over-care -- that gradually nurtures self-confidence in a youngster. So in a way I owe it to my parents, who are now 91 and 87 respectively. Maybe without being aware of it, they gave me my basic self-confidence.
This was a defining movement in retrospect, kind of a juncture of luck. I would not be able to experience what I experienced later on without this moment, but at the moment it was an expression of seriousness. I felt in the air that something very serious was happening. I could see the stress in the eyes of our commanders when they looked among the youngsters who had just joined the army for someone who can read the map and take such a convoy.
This was not an order. You volunteered.
Ehud Barak: Yes, I volunteered.
I've volunteered many times but it seemed to me that I tried to ask myself whether I can do it. I thought, "Yeah, I think I can do it. Yeah, it might not be easy but I can do it." And at the moment that I answered I didn't realize how complicated it could be in the dark. You can see nothing. It's a plain. There is not even roads and you have to use -- to try to assess the direction -- the compass, and there is no settlement and so on, and you should count the mileage before you cross the border into Egypt! Maybe there is nothing on the border that will tell you that you are crossing the border. But it somehow reassured me. But I don't remember it as something dramatic personally. And it happened to me once and again all my life after.
Events that became major achievements, I was always kind of feeling that I can judge myself in a calibrated way without drifting into too much enthusiasm.
What's it like to make a decision under those circumstances? Is it instinct? Imagination? Is it a cold, hard calculation?
Ehud Barak: In battlefield situation it's a combination of responses that have been imprinted upon us. We commanders imprint them upon ourselves, but the soldiers feel it too, certain automatic responses that make everyone feel better. And then it's a swift decision. Some things happen in a split second and are not the result of huge analytical work.
On the battlefield itself, no one will move if you are not moving. I used to tell my company commanders "If I, the battalion commander, will not go to a fire position, open fire and then give commands, no one will move. And if you company commanders will not be the first one to climb to fire position, every other tank crew will find some excuse not to climb, and we have to do it the first time." You don't have time. You somehow -- I believe that many good commanders in the field just somehow can make their overall judgment very quickly. I can compare it to something in which I'm very weak but I watch it. The way that tennis players are responding. They're not calculating. If you were to write the Newtonian equations of the moving of the tennis ball, what you should have done, or not to mention the Schrödinger equations of it, you will never end it. You've got to do what should be done and you don't assess whether you should do it this way or this way, just do what should be done.
It's only on the higher level, when you command a division or a corps, that you have enough time to contemplate. But then you face the other end of the spectrum; you think you have control, but if you are honest with yourself you know that you don't have full control. It is really decided by the fighting spirit and the performance and the determination of the young company commanders, and at most, the battalion commanders in the direct line of fire.
What are you thinking when you put on a pair of overalls to make an assault on terrorists?
Ehud Barak: First of all, I had a lot of experiences where I had to change dress.
I still remember an operation where we had some of our pilots taken by the Egyptians during the war of attrition. They intercepted some of those with SAM missiles and we decided that the only way to convince the Egyptians to release them is by taking some Egyptian pilots and bring them to Israel and then suggest that we will kind of exchange them. And the only way that we found was to stop at a road leading to an Egyptian Air Force (base), back deep in the Nile Valley, by appearing as an Egyptian military police to move them from the road and to take over some pilots. I initiated such a raid and I was one of the two policemen with the motorcycles, fully dressed as an Egyptian MP with someone who talked Baladic -- kind of a street Egyptian -- much better than I could, in a much more convincing way, and we really made it. And we established a kind of check post on the road to an Egyptian Air Force base and we began to take vehicles at midnight. There was not a lot of transportation. We ended up with 40 people in some six or eight trucks and vehicles, and not a single man in uniform.
They were all Egyptian civilians. One had a small pistol. Years later, when I was already prime minister, I told (Egyptian President) Mubarak this story. He joked -- he was himself a commander of the Egyptian Air Force -- and he told me that Egyptian pilots are more disciplined, they are not out in the streets after midnight. They're asleep at the air force base. But he was furious even in retrospect by the kind of chutzpah that we had, to take Egyptian uniforms and motorcycles and so on.
I looked at the dressing in (airplane mechanic) overalls as just a means to heighten the surprise. We were trying to storm a jet, a Sabena airliner with some 107 hostages in it. They were being held by a group of terrorists -- two gunmen and two females with hand grenades, some ammunition, some pistols and some explosives. We realized that unless we can surprise them, so they're defending themselves a split second after they realize we're attacking them, they will have enough time to connect and activate the explosive or to throw some hand grenades at the passengers and explode the whole thing.
First I thought of taking it over at night. I used to do almost everything at night. You can come closer. But there were a lot of hesitations in the upper echelon. Moshe Dayan and the chief of staff, and even Golda Meir in the Jerusalem office all hesitated. "Maybe we can negotiate with them. Maybe they will weaken and give up." So we found ourselves having to do it in the day time. In the day time you cannot come close.
One of the generals said, "Why don't we go closer to the airplane, kind of disguise ourselves or cover ourselves as mechanics while preparing it for taking off?" And we brought -- we even took some hundred young soldiers and some adults, gave them prison kind of suits to represent the Arab relieved terrorists that are coming from the prison, so they will see that everything is okay. And we took ourselves in a kind of trolley that small car that are working in airports. We created a train and we went there with overalls and nothing but small pistols underneath and some ladders to climb it. And we trained ourselves for about half an hour, maybe 45 minutes, how to storm the airplane, and how to open the emergency door from the outside, and how to climb from the nose wheel gear into the cockpit directly. And we went to do it and I felt -- what really worried me is the possibility that we will do everything okay but in the few seconds since the beginning of the assault and the actual facing of these terrorists, they can explode the whole thing together with us, and there was no solution for it. We just had to do it in an effective way.
Were you ever afraid during any of these operations?
What is most important to you, and why?
Ehud Barak: The dominating question of my youth, before I found the role of super navigator, was not a question of achievement but the question of meaning. "What is the meaning of this journey?" I remember when I first read the saying of John Maynard Keynes, "In the long run we are all dead." This is something that I felt from the very beginning. In the long run we are all dead, so we have to find a meaning. The real motivation of human beings has to do with the meaning of what we are doing.
I believe that I found from early youth that meaning could be found only in something that goes beyond your own kind of frame of skin and bones, and even self-interest. If something is serving you, if you can get the ultimate kind of domestic or self-indulgent situation, it will not satisfy you, I believe -- most human beings, I know for sure about myself -- for very long. It is only through something that seems to be important, meaningful, has to do with a wider group of human beings, and leave some imprint beyond your body, and in a way, beyond your time. That makes life meaningful. And that somehow -- I was born in a kind of mobilized society as I see it in retrospect, you know, it was a society shaping. A very strong feeling, unspoken feeling, that we are facing history, that we are fulfilling the dreams of generations of Jews, especially immediately after the Holocaust in my formative years when the remnants of the Holocaust were still coming.
I still remember going with my father from the collective dining room that I mentioned in the beginning and asking -- pointing to one of these Holocaust (survivors), a young woman, that came alone from Auschwitz or Majdanek -- I do not remember -- and she was taking a loaf of bread under her hand every evening from the dining room. And I asked my father why -- Anka was her name -- why Anka is taking this loaf of bread? There will be breakfast tomorrow. There will be bread on the table. He told me what hunger passed in her life will make her to her last day on earth taking this bread. She will never -- could be convinced that tomorrow there will be bread on the table. And so we -- you know, it is to a young kid of five years old or four years and a half, it kind of haunted me since then, and later on through all these wars I realized that we --that Israel is -- that we were born about the middle of last century, slightly before. Our generation did not learn the Alamo stories of his nation in the history books. We experienced them personally. It's a formative, personal, individual experience, a formative collective experience of the Israeli society. The bringing about of a Jewish sovereign entity that can defend itself, stepping back on the stage of real history. Not as a spiritual kind of heritage but as a real way of life for a people that suffered so much. So it became the kind of mobilizing factor of my life, and it gave a certain kind of meaning that you could not think of it when you are -- have to be alert to touch the trigger a split second before someone shoots at you. You don't think about history and so on. But somehow it was a kind of shaping for the whole generation that I was a part of.
When I became older I told young commanders, "Right now, as young leaders, it's more complicated than in our time. We could assume that all our soldiers are the product of a society that is mobilized. We didn't even give it a thought. So we could take many aspects of their behavior, including behavior under fire, as self-evident. Now you should invest a lot of energy in convincing people that they should identify, that they give an account of themselves as a group." In a way, our society is maturing, but in our time decisions were simpler.
I believe that at a very profound level, something very similar happens in every society in every generation. The real essence of it is not about achievement. Achievement is a means for certain people -- that have this predisposition to become leaders in whatever arena -- to reach something more profound, which is meaningful. The need for meaning is something that connects them with the people they lead, and with every other human being that searches for the meaning of life, or in life. I don't know how to put it in English.
I think you just did. Thank you so much for speaking with us.
Thank you.
|
6d9939aff8e12976 | Take the 2-minute tour ×
I know some proofs require the existence of large infinite ordinals; they give the fuel that drives induction principles. An example of this is the use of ε0 to give a consistency proof of Peano arithmetic.
What I would like to find is proofs that require the existence of a large finite ordinal. Thank you!
In a set-theory-like system, the existence of arbitrarily large finite ordinals can be proven from the axioms of set theory without the axiom of infinity. In an arithmetic-like system, you can prove the existence of arbitrarily large numbers using only the axioms related to the successor. If this is what you mean, then you won't get far by attempting something weaker. – abcdxyz Apr 11 '10 at 20:05
Or maybe there is another meaning to your question? – abcdxyz Apr 11 '10 at 20:07
I think the question is just asking about proofs where you have some kind of gigantic finite upper bound like Graham's number. – Harry Gindi Apr 11 '10 at 20:53
If that is the case, I would cite the proof that there exist infinitely many primes. – abcdxyz Apr 11 '10 at 20:56
As others have said, the word "require" in the title of the question and the logic tag create the apparently misleading impression that the OP is interested in a foundational system so weak that sufficiently large finite numbers do not exist! (Note that D. Zeilberger sincerely subscribes to this, at least as a philosophy; I had a fun email exchange with him which appears off of his opinions page.) The question rather seems to be: "What are some proofs where you can give an explicit, but ridiculously large, bound for something?" To me this is not so fascinating, but to each his own... – Pete L. Clark Apr 12 '10 at 7:33
5 Answers
This isn't addressed to logicians, but it may be of interest. I happen to know of an example in PDE that was necessary in proving the well-posedness of radial solutions of the nonlinear Schrödinger equation:
$$i u_{t}+\Delta u=|u|^{4}u$$
which J. Bourgain treated in work for which he was awarded his Fields Medal. (J. Bourgain, Global well-posedness of defocusing 3D critical NLS in the radial case, JAMS 12 (1999), 145-171).
In one of the many many critical steps required in this proof, a bound on energy is required. A team (J. COLLIANDER, M. KEEL, G. STAFFILANI, H. TAKAOKA, and T. TAO) have now treated the non-radial case and make explicit the large ordinals used for bounding the energy. I quote from page 36 of their paper "Global well-posedness and scattering for the energy-critical nonlinear Schrödinger equation in R^3":
"If one then runs the induction of energy argument in a direct way (rather than arguing by contradiction as we do here), this leads to very rapidly growing (but still finite) bound for M(E) for each E, which can only be expressed in terms of multiply iterated towers of exponentials (the Ackermann hierarchy). More precisely, if we use X ↑ Y to denote exponentiation X^Y, X↑↑Y :=X↑(X↑...↑X) to denote the tower formed by exponentiating Y copies of X, X↑↑↑Y :=X↑↑(X↑↑...↑↑X) to denote the double tower formed by tower-exponentiating Y copies of X, and so forth, then we have computed our final bound for M(E) for large E to essentially be M(E) ≤ C ↑↑↑↑↑↑↑↑ (CE^C). This rather Bunyanesque bound is mainly due to the large number of times we invoke the induction hypothesis Lemma 4.1, and is presumably not best possible."
Large numbers (Ackermann of Ackermann of Ackermann of ...... of something) tend to creep into modern additive combinatorics arguments due to some dark ergodic witchcraft tool which they call "PET induction" (PET = polynomial exhaustion technique), and some of its cousins. You can easily google the terms and find references; sadly, understanding what they actually do is (at least for me) a different matter altogether.
The example I know is the 1933 Skewes' number.
Looking at your question again, I have no idea whether this is what you wanted.
Large numbers are used in things like the busy beaver problem. However, since it has given me some good rep in the past, I once again recommend Harvey Friedman and his Enormous Numbers in Real Life. You can search Math Overflow for Harvey and see some of the posts which quote part of his article.
Gerhard "Ask Me About System Design" Paseman, 2010.04.11
|
f247a7b2e97ef91e | Santa Clara University
Physics department
Course Descriptions
1. Hands-On Physics!
How do scientists know what they "know"? Notions of scientific theory and experimentation are reviewed. Error analysis and instrumentation are emphasized. Includes a student-designed, peer-reviewed group project. (4 units)
2. Introduction to Astronomy: The Solar System
An introduction to astronomy with a particular focus on the origin and evolution of the solar system, and planets and their satellites. Topics include a brief history of the science of astronomy, telescopes and observational methods, gravitation, spectra and the sun, asteroids, comets, astrobiology, and searches for new planetary bodies and extraterrestrial life. Special emphasis is given to the Earth as a planet, with comparisons to Mars and Venus. Students should be familiar with arithmetic and basic algebra. Observational lab meets five times during the quarter. (4 units)
3. Introduction to Astronomy: The Universe
An introduction to astronomy with a particular focus on the origin and evolution of the universe, galaxies, and stars. Topics include a brief history of the science of astronomy, telescopes and observational methods, gravitation, spectra and the sun, black holes, nebulae, the big bang, and the expansion and ultimate fate of the universe. Special emphasis is given to theories of the cosmos from Stonehenge to the present. Students should be familiar with arithmetic and basic algebra. Evening observational lab meets five times during the quarter. (4 units)
4. The Physics of Dance
An exploration of the connection between the art of dance and the science of motion with both lecture/discussion sessions and movement laboratories. Topics include mass, force, equilibrium, acceleration, energy, momentum, torque, rotation, and angular momentum. Movement laboratory combines personal experience of movement with scientific measurements and analysis, in other words: "dance it" and "measure it." This is a lab science, not a dance technique course. Also listed as DANC 4. (4 units)
5. The Physics of Star Trek
Examines the physics and other science depicted in the Star Trek television shows and movies. Topics include Newton's and Einstein's physics, the Standard Model of particle physics, and the physics that underlies inertial dampers, transporter beams, warp drive, and time travel. Considers the impact on society of interplanetary and intergalactic travel, including the relationship between the space program and the advance of technology, the political ramifications of mankind's race to space, and the implications of the discovery of extraterrestrial life for religion and faith. (4 units)
8. Introduction to Space Sciences
An introduction to space exploration and how observations from space have influenced our knowledge of Earth and of the other planets in our solar system. This is synthesized within the context of the field of astrobiology, an interdisciplinary study of the origin of the Universe and the evolution and future of life on Earth. (4 units)
9. Introduction to Earth Science
Overview of geology and its significance to man. Earthquakes, volcanism, plate tectonics and continental drift, rocks and minerals, geologic hazards, and mineral resources. Emphasis on basic geologic principles and the role of geology in today's world. (4 units)
11. General Physics I
One-dimensional motion. Vectors. Two-dimensional motion. Newton's laws of motion. Law of gravitation. Planetary motion. Work. Kinetic and potential energy. Linear momentum and impulse. Torque and rotational motion. Rotational energy and momentum. Equilibrium. Elastic deformation of solids. Density and pressure of fluids. Bernoulli's principle. Buoyant forces. Surface tension. Lab. Prerequisites: MATH 11 or permission of the instructor. The PHYS 31/32/33 sequence and the PHYS 11/12/13 sequence cannot both be taken for credit. (5 units)
12. General Physics II
Temperature. Thermal expansion of solids and liquids. Thermal energy. Heat transfer. Specific heat. Mechanical equivalent of heat. Work and heat. Laws of thermodynamics. Kinetic theory of gases. Ideal gas law. Entropy. Vibration and wave motion. Hooke’s law. Sound. Electric charges, fields and potential. Gauss's Law. Ohm’s Law. Potential difference. Electric potential. Capacitors. Electric current. Resistance and resistivity. Electric energy and power. Kirchhoff’s Rules. RC circuits. Magnetic fields and forces. Ampere's Law. Induced EMF. Faraday's Law. Lenz's Law. Self inductance. Lab. Prerequisite: PHYS 11. The PHYS 31/32/33 sequence and the PHYS 11/12/13 sequence cannot both be taken for credit. (5 units)
13. General Physics III
RCL series circuit. Power in an AC circuit. Resonance. Transformers. Optics: reflection, refraction, mirrors, and lenses. Total internal reflection. Diffraction. Young’s double slit interference. Polarization. Optical Instruments. Relativity. Wave-particle duality. Photoelectric effect. X-rays. Pair production and annihilation. Bohr Atom. Spectra. Uncertainty principle. Quantum numbers. Radioactivity. Nuclear particles and reactions. Subnuclear particles. Lab. Prerequisite: PHYS 12. The PHYS 31/32/33 sequence and the PHYS 11/12/13 sequence cannot both be taken for credit. (5 units)
19. General Physics for Teachers
A primarily conceptual general physics course designed for future teachers. Topics covered include scientific inquiry, mechanics, gravitation, properties of matter, heat, sound, electricity and magnetism, light, relativity, atomic and nuclear physics, and astronomy. (4 units)
31. Physics for Scientists and Engineers I
Measurement. Vectors. Straight-line kinematics. Kinematics in two dimensions. Laws of inertia, mass conservation, and momentum conservation. Center-of-mass and reference frames. Force. Newtonian mechanics and its applications. Work and kinetic energy. Potential energy and energy conservation. Rotational dynamics. Statics. Includes weekly laboratory. Prerequisite: Math 11. The PHYS 31/32/33 sequence and the PHYS 11/12/13 sequence cannot both be taken for credit. (5 units)
32. Physics for Scientists and Engineers II
Simple harmonic motion. Gravitation. Kepler's laws. Fluids. Waves. Sound. Interference, diffraction, and polarization. Thermodynamics. Includes weekly laboratory. Prerequisites: Physics 31 and Math 11. (Math 12 may be taken concurrently.) The PHYS 31/32/33 sequence and the PHYS 11/12/13 sequence cannot both be taken for credit. (5 units)
33. Physics for Scientists and Engineers III
Electrostatics. Gauss's law. Potential. Capacitance. Electric current. Resistance. Kirchhoff's rules. DC circuits. AC circuits. Magnetic force. Ampere's Law. Electromagnetic induction. Includes weekly laboratory. Prerequisites: Math 12 and Physics 32. (Math 13 may be taken concurrently.) The PHYS 31/32/33 sequence and the PHYS 11/12/13 sequence cannot both be taken for credit. (5 units)
34. Physics for Scientists and Engineers IV
Special relativity. Historical development of modern physics: black body radiation, photoelectric effect, Compton scattering, X-rays, Bohr atom, DeBroglie wavelength, Heisenberg uncertainty principle. Quantum waves and particles. Schrödinger equation. Nuclear structure and decay. Particle physics. Introduction to semiconductors. Includes weekly laboratory. Prerequisite: Physics 33. (5 units)
70. Electronic Circuits for Scientists
Linear electric circuits. DC analysis, network theorems, phasor AC analysis. Diode circuits. Physics of p-n junction. Junction diodes, field-effect devices, bipolar junction transistors. Elementary amplifiers. Small-signal device models. Logic gates, digital integrated circuits, Boolean algebra, registers, counters, memory. Operational Amplifier circuits. Linear amplifier bias circuits. Includes weekly laboratory. Prerequisite: Physics 33. (5 units)
103. Analytical and Numerical Methods in Physics
Basic elements of programming in MATLAB®. Ordinary and partial differential equations. Fourier transforms and spectral analysis. Linear regression and curve fitting. Numerical integration. Stochastic methods. Selected applications include planetary motion, diffusion, Laplace and Poisson equations and waves. Weekly computer lab. Prerequisites: PHYS 33 and MATH 22 or AMTH 106. (5 units)
104. Analytical Mechanics
Calculus of variations. Hamilton’s principle. Lagrangian and Hamiltonian approaches to classical dynamics. Central force motion. Noninertial reference frames. Dynamics of rigid bodies. Selected topics in classical dynamics such as coupled oscillators, special relativity, and chaos theory. Prerequisites: PHYS 31 and MATH 22 or AMTH 106. (5 units)
111. Electromagnetic Theory I
Review of vector calculus. Dirac delta function. Electrostatic fields. Work and energy. Laplace’s and Poisson’s equations. Separation of variables. Fourier’s trick. Legendre equation. Multipole expansion. Computational problems. Prerequisites: PHYS 33 and MATH 22 or AMTH 106. Co-requisite: PHYS 103. (5 units)
112. Electromagnetic Theory II
Magnetostatics. Induced electromotive forces. Maxwell’s equations. Energy and momentum in electrodynamics. Electromagnetic stress tensor. Electromagnetic waves. Potential formulation. Computational problems. Dipole radiation. Prerequisite: PHYS 111. (5 units)
113. Advanced Electromagnetism and Optics
Geometric optics. Polarization and optically active media. Interferometry. Optical signal and noise in detection and communication. Interaction of light with metals, dielectrics, and atoms. Thermal radiation. Laser operation. Prerequisite: PHYS 112. (5 units)
116. Physics of Solids
Crystal structure. Phonons. Free electron theory of metals. Band theory of solids. Semiconductors. Electrical and thermal transport properties of materials. Magnetism. Superconductivity. Topics from current research literature. PHYS 116 is taught as a capstone course. Prerequisites: PHYS 120, PHYS 121, and senior standing. (5 units)
120. Thermal Physics
Laws of thermodynamics with applications to ideal and nonideal systems. Elementary kinetic theory of gases. Entropy. Classical and quantum statistical mechanics. Selected topics from magnetism and low-temperature physics. Prerequisites: PHYS 34 and PHYS 103. Recommended: PHYS 121. (5 units)
121. Quantum Mechanics I
The Schrödinger equation. The wave-function and its interpretation. One dimensional potentials. Harmonic oscillator. Methods in linear algebra including matrix operations, unitary transformations and rotations, eigenvalue problems and diagonalization. Hilbert space, observables, operators, and Dirac notation. The hydrogen atom. Prerequisites: PHYS 34 and PHYS 103. (5 units)
122. Quantum Mechanics II
Angular momentum and spin. Electrons in EM field. Addition of angular momenta. Identical particles. Time-independent perturbation theory. Fine and hyperfine structure. Time-dependent perturbation theory and its application to light-matter interaction. Fermi's golden rule. Prerequisite: PHYS 121. (5 units)
123. Quantum Mechanics III
Variational principle. WKB approximation. Scattering theory. Quantum paradoxes. Introduction to quantum computation: qubits, quantum gates and circuits, quantum teleportation, quantum algorithms, error correction codes. Quantum computer implementations. Includes weekly laboratory. Prerequisite: PHYS 122. (5 units)
141. Modern Topics in Physics
A selection of current topics in physics research. (5 units)
151. Advanced Laboratory
Laboratory-based experiments in the areas of atomic, nuclear, and quantum physics. Emphasis on in-depth understanding of underlying physics, experimental techniques, data analysis, and dissemination of results. Design and implementation of independent table-top project. Introduction to LabVIEW™. Written and oral presentations. Prerequisite: Senior standing. (5 units)
161. Introduction to Astrophysics
A survey of astronomy for science majors focused on the physics and mathematics that astronomers use to interpret observations of planets, stars, and galaxies. Topics include the kinematics of objects in the solar system, the nature of stars and their evolution, and the origin and fate of the universe. Prerequisite: PHYS 33. PHYS 34 recommended but not required. (5 units)
162. Cosmology
A survey of cosmology for science majors. Much of the course will focus on the properties of an idealized, perfectly smooth, model universe. Topics include the formation of galaxies and clusters in an evolving universe, governing differential equations which describe the dynamics of the universe, the Benchmark Model of the universe, Dark Matter and Dark Energy, the Cosmic Microwave Background and its fluctuation spectrum, annihilation epochs and their consequences, Big Bang nucleosynthesis, and problems with the standard Big Bang models and inflation theory. Prerequisites: PHYS 34 or PHYS 161. Knowledge of calculus through differential equations is assumed. (5 units)
171. Biophysics
Diffusion and dissipation in cells. Friction and inertia in biological systems. Entropic and chemical forces. Macromolecules. Molecular machines. Ion pumps. Nerve impulses. Prerequisite: PHYS 33. (5 units)
190. Senior Seminar
Advanced topics in selected areas of physics. Enrollment by permission of instructor. (2 units)
192. Physics and Society
Physics research that has a significant societal impact presented by invited speakers from academia, the private sector, and government laboratories. Students participate in discussions and write reflection papers. Prerequisite: PHYS 34. (1 unit)
198. Undergraduate Physics Research
Departmental work under close professorial direction on research in progress. Permission of the professor directing the research must be secured before registering for this course. (1-5 units)
199. Directed Reading in Physics
Detailed investigation of some area or topic in physics not covered in the regular courses; supervised by a faculty member. Permission of the professor directing the study must be secured before registering for this course. (1-5 units)
|
8937fffeaf446530 |
What does superposition mean in quantum mechanics?
When I say $A+B=C$ (forces), I can mean: push something with force $A$ and force $B$ together, and that is the same as pushing it with force $C$.
But when I say that wavefunction $A+B$ is also a solution of the Schrödinger equation, what do I mean? The physics behind the two cases is obviously not the same. Is it just something purely mathematical?
4 Answers
• Math:
If you have an operator $D$ with $$D(\Psi+\Phi)=D(\Psi)+D(\Phi),$$ then if $D(\Psi)=0$ and $D(\Phi)=0$, you can also conclude that $D(\Psi+\Phi)=0$. This is the case for the Schrödinger equation, as it reads
$$D(\Psi):=(i\hbar\tfrac{\partial}{\partial t}-H)\Psi=0,$$
where $H$ is linear. For example you certainly have linearity for the derivatives: $$(f(x)+g(x))'=f'(x)+g'(x)$$ and even more so for multiplicative operators: $$V(x)\cdot (f(x)+g(x))=V(x)\cdot f(x)+V(x)\cdot g(x).$$
The books point out that superposition works this way in order to emphasise that the probability waves don't affect each other, and that this enables you to find solutions of the equation.
If, in contrast, the Schrödinger equation would read
$$D(\Psi):=(i\hbar\tfrac{\partial}{\partial t}-H)\Psi^2=0,$$
which is non-linear because of the $\Psi^2$, then you'd have
$$D(\Psi+\Phi)=(i\hbar\tfrac{\partial}{\partial t}-H)\left(\Psi^2+2\,\Psi\,\Phi+\Phi^2\right),$$
and from $\Phi$ and $\Psi$ being solutions ($D(\Psi)=0$ and $D(\Phi)=0$) it would not follow that $\Psi+\Phi$ is a solution too (you only get $D(\Psi+\Phi)=0+0+D(\sqrt{2\cdot\Psi\cdot\Phi})\ne0$).
• Physics:
What do you mean by "the physics between them"?
Anyway, as an illustration, if you have a function like $\Psi(x)=A\text{e}^{-(x-3)^2}$, which is a bump located around the point $x=3$, and you add to it a function $\Phi(x)=B\text{e}^{-(x-7)^2}$, which is a bump located around the point $x=7$, then you get a function $$\chi(x):=\Psi(x)+\Phi(x)=A\text{e}^{-(x-3)^2}+B\text{e}^{-(x-7)^2},$$ which has two bumps.
The wave function relates to probability densities, and if you have a high probability at the point $x=3$ for $\Psi$ and at $x=7$ for $\Phi$, then $\Psi+\Phi$ will tend to describe a situation which has relatively high probabilities at both of these points.
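A quick numerical check of both points, the linearity of a Schrödinger-type operator and the two-bump density, can be done in a few lines (the grid, the choice $H=-d^2/dx^2$ with constants dropped, and $A=B=1$ are my simplifications):

```python
# A minimal numerical check (my own illustration). H below is the spatial
# part -d^2/dx^2 of a free Schrodinger operator, discretised on a grid;
# linearity of H is what makes superpositions of solutions work.
import numpy as np

x = np.linspace(0.0, 10.0, 2001)
psi = np.exp(-(x - 3) ** 2)        # bump located around x = 3
phi = np.exp(-(x - 7) ** 2)        # bump located around x = 7

def H(f):
    return -np.gradient(np.gradient(f, x), x)   # linear: -f''

# Linear operator: H(psi + phi) equals H(psi) + H(phi) up to rounding.
print(np.max(np.abs(H(psi + phi) - H(psi) - H(phi))))          # ~1e-12

# Nonlinear variant f -> H(f^2): the cross term 2*p*q breaks additivity
# (bumps moved to x = 4 and x = 6 so that they actually overlap).
p, q = np.exp(-(x - 4) ** 2), np.exp(-(x - 6) ** 2)
print(np.max(np.abs(H((p + q) ** 2) - H(p ** 2) - H(q ** 2))))  # order 1

# The superposition chi = psi + phi has two bumps in its density |chi|^2.
chi = psi + phi
for point in (3.0, 5.0, 7.0):
    i = int(np.argmin(np.abs(x - point)))
    print(f"|chi({point})|^2 = {abs(chi[i]) ** 2:.3f}")   # high, low, high
```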
A wavefunction is a fundamentally different concept to anything that exists in "classical" physics. This question deals with what a wavefunction 'looks like'.
You're asking what a superposition of wavefunctions is. You could look at it mathematically as $$|c\rangle = |a\rangle + |b\rangle$$ where $a$, $b$, and $c$ are wavefunctions (if you're unfamiliar with the notation used, take a look at this wikipedia page). But physically this doesn't correspond to the superposition of forces, or electromagnetic fields. The mathematical forms of both are similar, but the physical interpretation is quite different.
For example, consider the hydrogen atom - I'm going to assume you know something about orbitals and energy levels; if not, I can explain further. Take an electron in the lowest energy level and call that wavefunction $\psi_{0}(\vec r)$ (or $|\psi_{0}\rangle$), and if the same electron is in the first excited energy level, call that $\psi_{1}(\vec r)$ (or $|\psi_{1}\rangle$). Now it is possible to have the electron in a superposed state, where the wavefunction is given by $$|\psi\rangle = |\psi_{0}\rangle + |\psi_{1}\rangle $$ What this seems to mean is that the electron is both in the lowest energy level and the first excited state at the same time. This may seem wrong, because of our intuition, but that's exactly what it means.
Usually, you wouldn't see this superposition if you observed a quantum system. This is because a superposed state will 'collapse' to one of the states that make it up with some finite probability. So you end up seeing the particle sitting definitely in one or the other energy levels. But this year's Nobel has been given to people who've managed to brilliantly circumvent this problem. You could read a little bit about it here, if you haven't already.
So to conclude - A superposition means that you have (mathematically) a sum of two wavefunctions. Physically this corresponds to nothing that you can relate to classically, which is what makes quantum mechanics weird (but awesome).
One way to think of superposition is this: If particles behave to some degree like waves in the sense that they can never be completely "squeezed down" into actual points, then the waves -- the probability functions -- can add together very much like waves on a pond. So, just as on a pond surface you could combine together large waves with crests a foot apart traveling north with small waves whose crests are an inch apart traveling east, you can in principle do exactly the same thing with the probability waves of an electron.
Wave addition is surprisingly simple, incidentally, amounting to not much more than superimposing the smaller wave onto the moving surface of the larger wave. So, while the heights of the waves at any one point will change as the two waves move, the height of the wave at that point will always be nothing more than a simple arithmetic sum of the heights that each wave would have had separately. That nice, simple arithmetic property is called linearity, and (fortunately for physicists seeking simplicity!) it can be found throughout much of physics.
In the case of the electron there is one additional constraint: A single electron can only generate a finite amount of wave action. That wave action can be split up in many different ways and into many different types of waves, but the total sum of all those waves must always add up to one "electron's worth" of wave action. So for example, just as with the pond waves, an electron wave could consist of an equal mix of large waves moving north and small waves moving south, as long as the two sets of waves always add up to "one electron" of total wave action.
Now the fun part is that when electrons are modeled as waves, those waves have a very specific meaning, one that is a bit less than intuitive. The interpretation is this: The big waves traveling north mean that if you poke hard at the wave with something like a photon, you will sometimes (half the time if the two wave types are equal in strength) find an electron moving north, rather slowly. However, the instant you find the electron by using such a poke, all of that wave interpretation "instantly" disappears. (I say "instantly" in quotes because that is a very loaded term in that context; but that's for some other answer!)
However, since there are two types of electron waves added together, that same poke is just as likely to find the electron moving east at a much faster clip, which is what the more tightly spaced eastbound wave means. Once again, if a poke finds the electron moving east, all of the wave interpretations cease to have meaning and you simply have an electron that looks a lot more like a particle in terms of where it is located.
Once found, the electron becomes a candidate for creating new waves and starting the process all over again. That is what happens with conduction electrons in metals, for example. Or, alternatively, it could get captured by a heavier object such as an atom, and at that point it would cease to behave like a roaming wave.
However, even then the electron does not stop behaving like a wave. In fact, the entire discipline of chemistry amounts to a detailed mapping out of what happens when the many different waves possible for a charged electron become bound into a tight, cramped, and mostly spherical space, one in which it must argue and negotiate and continually bump into other electrons in an attempt to find its own little bit of turf. From these waves and the refusal of electrons (called fermion behavior) to pack together tightly comes all of the rich behavior that makes matter and life possible.
share|improve this answer
Mathematically (as I think you already know) superposition means that if I evolve the quantum state $|c\rangle =|a\rangle + |b\rangle$, the result will be the same as separately evolving $|a\rangle$ and $|b\rangle$, and adding the results. That is due to the linearity of the Schrödinger equation.
This means that there exists a simple connection between the states out of which $|c\rangle$ is made and $|c\rangle$ itself. Without this, quantum mechanics would be infinitely more difficult.
Whether this is purely mathematical is a little bit a matter of semantics. I guess most people would see the same property of the electric field as highly intuitive, but to others it would appear highly mathematical.
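As a sanity check, this linearity can be verified numerically for an arbitrary Hermitian Hamiltonian (the random matrix below is just a stand-in):

```python
# Sanity check (my own illustration): for a random Hermitian H, the unitary
# evolution U = exp(-iHt) is linear, so evolving a + b matches evolving a
# and b separately and adding the results.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
Hm = (M + M.conj().T) / 2              # Hermitian "Hamiltonian" (hbar = 1)
U = expm(-1j * Hm * 0.7)               # evolution operator for t = 0.7

a = rng.normal(size=4) + 1j * rng.normal(size=4)
b = rng.normal(size=4) + 1j * rng.normal(size=4)
print(np.allclose(U @ (a + b), U @ a + U @ b))   # True
```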
|
558979f44adfdec5 | The Uppsala Quantum Chemistry Package
The Uppsala Quantum Chemistry (uquantchem) package is a program designed to solve the non-relativistic Schrödinger equation for atoms and molecules using Gaussian basis sets.
The program has been written in Fortran 90 by Petros Souvatzis, and for the moment the following capabilities have been implemented:
(1) Unrestricted Hartree-Fock (URHF)
(2) Restricted Hartree-Fock (RHF)
(3) Configuration Interaction calculations including Singles and Doubles (CISD)
(4) Møller-Plesset many-body perturbation calculations to second order (MP2)
(5) Diffusion Quantum Monte Carlo (DQMC)
(6) Variational Quantum Monte Carlo (VMC)
(7) Density Functional Theory (DFT) calculations
(8) Time Dependent Density Functional Theory (TDDFT) calculations
(9) Extended Lagrangian Born-Oppenheimer Molecular Dynamics (XL-BOMD)
(10) Fast First Principles Molecular Dynamics (Fast-QMMD)
(11) OPENMP and MPI parallelization
(12) Resolution of the Identity approximation (RI-approximation)
The UQUANTCHEM program can be obtained free of charge by clicking on the download button below, or through GitHub.
Unfortunately there is for the moment no guarantee that the program is free from bugs, so be vigilant and always question whether your results seem physically sound or not.
Please cite the above article if you are going to publish results obtained with the uquantchem code. |
422ffaa553c64990 | Friday, February 03, 2017
Lindblad equation can't solve any "problems" of quantum mechanics
What I find more ludicrous is Weinberg's and Hossenfelder's suggestions that such new terms would "solve" something about what they consider mysteries, paradoxes, or problems of quantum mechanics. The first sentence of Weinberg's paper says
In searching for an interpretation of quantum mechanics we seem to be faced with nothing but bad choices.
and the following sentences repeat some of the by now standard Weinberg's critical words about Copenhagen as well as other "interpretations". The message is that this work about the extra "Lindblad terms" solve some mystery of quantum mechanics because they make something like the wave function collapse "more real". Similarly, Hossenfelder's most positive paragraph in favor of these efforts says:
I don't think that the right word is "unpopular" to describe the statement that such "fundamental decoherence" would "really solve the problem". Instead, this statement is self-evidently wrong.
Even if the extra Lindblad parameters \(\lambda_{mn}\) were nonzero and discovered, and it won't happen, we wouldn't find any "more enlightening" version of quantum mechanics. We would still have similar equations with the same objects and with some new terms that used to be zero but now they are nonzero. If a conceptual change appeared at all, the situation would clearly get more mysterious, not less so. If someone finds neutrinos mysterious, the discovery of the nonzero neutrino masses hardly makes things easier for him. Or consider the same sentence with the QCD theta-angle, CP-violating phases, cosmological constant, or any other parameter that could have been zero but wasn't. If you couldn't understand the theory with a vanishing value of these parameters, the more complex or generalized theory with the new nonzero parameters will be even harder for you, won't it?
OK, the Lindblad equation is the following equation for a density matrix:\[
\begin{aligned}
\dot \rho(t) &= -i[H,\rho(t)]+\\
&\quad +\sum_\alpha \left[ L_\alpha \rho(t) L^\dagger_\alpha-\frac 12\left\{ L_\alpha^\dagger L_\alpha,\rho(t) \right\} \right]
\end{aligned}
\] This equation is the most general linear equation for the density matrix \(\rho(t)\) that preserves its trace (total probability) and the Hermiticity. The sum over \(\alpha\) runs over at most \(N^2-1\) new terms. Aside from the Hamiltonian matrix \(H\), one must pick many new operators \(L_\alpha\) and their conjugates to define the laws of physics.
I've divided the equation into two lines. The first line is the normal equation for the density matrix, one easily derived from the Schrödinger equation for \(\ket\psi\). The second line contains all the new terms that are zero according to contemporary physics but are proposed to be nonzero by Weinberg (and others) and that should be tested by atomic clocks.
Note that \(\rho(t)\) is Hermitian, and so is therefore the left hand side. The first, normal term of the right hand side is a commutator with \(H\) which is Hermitian. For the commutator to be Hermitian as well, the coefficient has to be pure imaginary. On the contrary, the new Lindblad terms have a real coefficient.
To see what these terms are doing or "should do", it's better to look at an Ansatz for a solution – which is Weinberg's equation (3):\[
\rho_{mn}(t) = \rho_{mn}(0) \times \exp\left[ -i(E_m-E_n)t -\lambda_{mn}t \right].
\] The Ansatz was written in an energy eigenstate basis. The oscillating part of the exponent looks just like in Heisenberg's papers and the frequency is \(E_m-E_n\). The diagonal elements of \(\rho(t)\) don't change at all while the off-diagonal elements have a phase that changes with time with this frequency. What's new is the extra, exponentially decreasing factor of \(\exp(-\lambda_{mn}t)\). The off-diagonal elements don't have a constant absolute value, as they should have in unitary quantum mechanics, but they're exponentially damped with some rate \(\lambda_{mn}\) which are parameters bilinear in the matrix elements of the \(L_\alpha\) matrices in the Lindblad equation.
These off-diagonal elements of the density matrix contain the information about the relative phases of the wave function. Decoherence makes them go to zero. Here they are going to zero exponentially so it's "some kind of decoherence". Except that this is proposed to be decoherence due to new terms in the fundamental laws of physics, not due to the interaction with a subsystem labeled the "environment".
The Lindblad equation may appear as an effective equation for an open system that interacts with some environment that we can't trace so instead, we trace over it. But does it make any sense to consider it as a fundamental equation? I don't think so.
First, the modification back to \(\lambda_{mn}=0\) is just prettier and better.
I decided to place this objection at the top. The point is that the addition of all these \(\lambda_{mn}\neq 0\) damped factors is extremely artificial and it makes sense to cut this whole line of generalization by Occam's razor. If the Lindblad equation for some \(H\) and some \(L_\alpha\) has some nice properties, you may be pretty sure that the equation where you simply set \(L_\alpha=0\) is at least equally pretty. You can't lose any virtue by that. On the contrary, you lose virtues when you consider nonzero \(L_\alpha\).
Second, lots of new operators have to be defined on top of the Hamiltonian.
This is an addition to the first complaint but it may be viewed as an independent one. In normal quantum mechanics, we only determine one matrix on the Hilbert space, the Hamiltonian (or directly the S-matrix etc.). Here we must choose the Hamiltonian and about \(N^2-1\) additional operators on the Hilbert space \(L_\alpha\). Who are they? What deeper principle could possibly determine or at least constrain them?
Third, the Lindblad equation doesn't allow any Heisenberg picture at all.
The normal equation has \(L_\alpha=0\) and only contains the commutator with \(H\) in the evolution. Consequently, the evolution in time is a unitary transformation. You may pick a time-dependent basis of the Hilbert space in which the coordinates of \(\ket\psi\) or \(\rho\) will look constant and the operators such as \(x(t),p(t)\) will be time-dependent instead. This is the Heisenberg picture. With the Lindblad equation, you can't do that. There's no basis in the Hilbert space in which \(\rho(t)\) could be constant – after all, its eigenvalues are changing with time. Consequently, you won't be able to write this theory in any Heisenberg picture.
This is a far deeper problem than people like Weinberg may realize. One reason is that the equations for the operators in the Heisenberg picture basically emulate the classical evolution equations for \(x(t),p(t)\) etc. The Heisenberg picture is an elegant way to see that quantum mechanics reduces to classical physics. Now, because you can't write the Weinberg-Lindblad theory in the Heisenberg picture, you won't be able to show the right classical limit. So in fact, by adding the new Weinberg-Lindblad terms, you have made the theory less compatible with classical physics that Weinberg loves so much, not more so!
For this reason, I also suspect that you wouldn't need any atomic clocks to falsify this theory. This theory almost certainly predicts some completely wrong unobserved things for physical systems that are highly classical.
Fourth, the new terms are pretty much by definition proofs that "you are missing something"
I've mentioned that the Lindblad equation may be obtained as an effective equation if you eliminate some environment you can't track. I would argue that the converse is true, too. If you have the Lindblad equation, it shows that it's some effective equation: you have eliminated some degrees of freedom, you should return to the blackboard and see what this deeper physics that you have ignored is and where it is hiding! Weinberg is acting as if he believed that the opposite is true: If he found the ugly new terms that normally emerge in effective theories only, he would be led to believe that he has found a more fundamental theory. This thinking clearly seems upside down.
OK, what are you missing when you see these new effective terms?
Bonus: the Lindblad equation is a quantum counterpart of "classical physics with Brownian random forces"
In classical deterministic physics, if you know the point \(x_i(t),p_i(t)\) in the phase space at one moment, you may calculate it at later moments \(t\), too. To explain the Brownian motion, Einstein (and the Polish guy, Smoluchowski) considered a generalization of deterministic classical physics in which the particle is also affected by classical but random forces (from the surrounding atoms) which are described by some distributions.
So even if the precise position and momentum were known at one moment, they would be unknown after some time of the Brownian motion. The peaked distribution on the phase space would get "dissolved".
This is exactly how you should think about the effect of the new Lindblad terms. They're like some random forces described in terms of the density matrix. Is something getting dissolved as well? Is the exponential decrease of the off-diagonal elements equivalent to the classical spreading of the distribution on the phase space?
You bet. It's not obvious in the basis that Weinberg chose – if the diagonal entries of \(\rho\) don't change. But if you pick any different basis, even the diagonal entries will change – they will be evolving towards values that are closer to each other and that's equivalent to the dissolution of the peaked distribution in the phase space. So there should be some molecules etc. that are causing this randomization of the pollen particle etc.!
Fifth, the new terms violate the conservation laws and/or locality
In a 1983 paper that Weinberg is aware of, Banks, Susskind, and Peskin argued that the equation violates either locality or energy-momentum conservation. Weinberg mentions this paper as well as a 1995 paper by Unruh and Wald which claims to have found some counterexamples to Banks et al. I don't quite understand what those guys have done but I am pretty sure that the counterexamples would have to be extremely artificial.
Look at the formula for \(\rho_{mn}(t)\) above. You see that if you want to preserve the energy conservation law, you really want the exponential decrease to affect the off-diagonal elements in an energy basis only. It means that the matrices \(L_\alpha\) in the extra terms must be able to determine or "calculate" what the energy eigenvectors are. If you just place some generic matrices, the conservation laws will be violated.
Sixth, CPT theorem trouble
Also, the solution to the Lindblad equation has entries that are exponentially decreasing in time. That's an intrinsic time-reversal asymmetry. Well, the legality of these solutions and the elimination of the opposite ones contradicts the existence of any CPT-symmetry. So the CPT-theorem just couldn't hold in any generalized Weinberg-Lindblad theory of this kind. You could ask whether it should hold at all.
Well, I think it should. The CPT transformation is just a continuation of the Lorentz group, the rotation of the \(t_Ez\)-plane by 180 degrees which just happens to make sense even in the Minkowski signature. So the CPT symmetry is closely linked to the Lorentz symmetry. None of this reasoning may be quite applied to the Weinberg-Lindblad theory because operations (in particular, the evolution operations) are not identified with unitary transformations in that theory etc. But I think it must lead to inconsistencies – either non-locality or a violation of the conservation laws.
I am convinced that under reasonable assumptions, it leads to problems with both – conservation laws as well as locality and/or Lorentz symmetry. One "morally non-relativistic" aspect of the Lindblad laws is that the evolution in time isn't represented just by a unitary operator while the translation i.e. evolution in space is still just a unitary transformation. So the temporal and spatial components of a four-vector (energy-momentum) seem to be qualitatively different. I would be surprised if the Lorentz invariance could be preserved by laws like that – at least if these laws are determined by some principles, instead of just by an artificial construction designed to prove me wrong.
Seventh, it just doesn't help you with any "mysteries of quantum mechanics"
But as I said, the most important problem isn't any particular technical flaw in the equations even though I do believe that the troubling observations above are flaws of the theory. The main problem is that these analyses have nothing to say about the "broader problem" that Weinberg talks about, namely his problems with the foundations of quantum mechanics.
Imagine that the new terms exist and are nonzero. So there exists an experiment, e.g. one with an atomic clock, that may show that some \(\lambda_{mn}\neq 0\). This experiment must be accurate enough – so far, similar experiments couldn't see any violation of normal quantum mechanics i.e. they couldn't have proven any \(\lambda_{mn}\neq 0\). The evidence that the new parameters are nonzero is increasing with some time – because these terms cause some intrinsic decoherence that deepens with time.
OK, so even if you said that the experiment for times \(t\gt t_C\) that are enough to see the new Weinberg-Lindblad effects proves that "things are less mysterious" because the relative phases have dropped almost to zero, it would still be true that for \(t\lt t_C\), the damping is small or negligible and the system basically follows the good old unitary rules of quantum mechanics. So the "trouble with quantum mechanics" when applied to your experiment at \(t\lt t_C\) would be exactly the same as it was before you introduced the new terms! The effect of all the new terms would be small or negligible, just like in all experiments that have been confirming unitary quantum mechanics so far.
The idea that the damping of some elements of the density matrix reduces the mystery of quantum mechanics is utterly irrational. At most, the Lindblad-Weinberg equation – if a natural version of it could exist, and I feel certain that it can't – could pick a preferred basis of the Hilbert space e.g. of your brain that would tell you which things you may feel and which you can't. Except that even in normal quantum mechanics, it's not needed. Even without decoherence, any density matrix may be diagonalized in some basis. So you may always view that as the basis that would be classically perceived, if you adopt the viewpoint that the non-vanishing off-diagonal elements clash with perception.
And like ordinary decoherence, this Lindblad-induced decoherence doesn't actually pick one of the outcomes. Decoherence makes a density matrix diagonal but it doesn't bring it to the form \({\rm diag}(0,0,1,0,0,0)\) or a similar one.
To summarize, even if pieces of the analyses of atomic clocks are correct, the broader talk about all these things is completely wrong. None of these hypothesized new terms can "solve" any of the "problems" that Weinberg talks about. Weinberg has confined these wrong comments about the interpretation to the first paragraph of his paper. But Hossenfelder didn't confine them. Let me mention her sentences that aren't right:
Our world is never un-quantum. Our world – and both small and large objects in it – obey the laws of quantum mechanics. If you think that any observation of large objects we know disagrees with quantum mechanics, and it's the only meaning of "un-quantum" I can imagine, then you misunderstand what quantum mechanics actually does and predicts.
Decoherence is not "needed" for anything. It's just an effective re-organization of the dynamics in situations where a part of the physical system may be viewed as an environment, a re-organization that explains why the relative phases are being forgotten – and therefore one of the first steps needed to explain why a classical theory is sufficient to approximately describe everything (decoherence is needed for that because the main thing that classical physics refuses to remember is the relative quantum phases). But the forgetting still obeys the laws of quantum mechanics; it in no way contradicts them.
If "someone" is doing something else, it's just not quantum mechanics. The dynamical laws of quantum mechanics are performing the evolution of the probability amplitudes – either in the state vector, density matrix, or operators. The rest is to connect these probability amplitudes with the observations. But this isn't done by Nature. Instead, it's done by the physicist. It's the physicist who must understand what a probability amplitude or a probability means and that's what allows him to apply the calculations of the unitary evolution on objects around him. But the application of the laws isn't something that "Nature does". Instead, it is what a "physicist does". And if she doesn't know how to do it right, or if she has some religious or psychological obstacles that prevent her from doing it at all, it's her f*cking defect, not Nature's. (Note that I have used "she" and "her" in order to be politically correct.)
|
253e80298728998f |
Bunimovich stadium - Wikipedia
Rather than working with (classical) individual trajectories, one can also work with (classical) invariant ensembles – probability distributions in phase space which are invariant under the billiard dynamics. Ergodicity then says that (at a fixed energy) there are no invariant absolutely continuous ensembles other than the obvious one, namely the probability distribution with uniformly distributed position and velocity direction. On the other hand, unique ergodicity would say the same thing but dropping the “absolutely continuous” – but each vertical bouncing ball mode creates a singular invariant ensemble along that mode, so the stadium is not uniquely ergodic.
Now from physical considerations we expect the quantum dynamics of a system to have similar qualitative properties as the classical dynamics; this can be made precise in many cases by the mathematical theories of semi-classical analysis and microlocal analysis. The quantum analogue of the dynamics of classical ensembles is the dynamics of the Schrödinger equation i\hbar \partial_t \psi + \frac{\hbar^2}{2m} \Delta \psi = 0, where we impose Dirichlet boundary conditions (one can also impose Neumann conditions if desired, the problems seem roughly the same). The quantum analogue of an invariant ensemble is a single eigenfunction -\Delta u_k = \lambda_k u_k, which we normalise in the usual L^2 manner, so that \int_\Omega |u_k|^2 = 1. (Due to the compactness of the domain \Omega, the set of eigenvalues \lambda_k of the Laplacian -\Delta is discrete and goes to infinity, though there is some multiplicity arising from the symmetries of the stadium. These eigenvalues are the same eigenvalues that show up in the famous “can you hear the shape of a drum?” problem.) Roughly speaking, quantum ergodicity is then the statement that almost all eigenfunctions are uniformly distributed in physical space (as well as in the energy surface of phase space), whereas quantum unique ergodicity (QUE) is the statement that all eigenfunctions are uniformly distributed. In particular:
• If quantum ergodicity holds, then for any open subset A \subset \Omega we have \int_A |u_k|^2 \to |A|/|\Omega| as \lambda_k \to \infty, provided we exclude a set of exceptional k of density zero.
• If quantum unique ergodicity holds, then we have the same statement as before, except that we do not need to exclude the exceptional set.
(In fact, quantum ergodicity and quantum unique ergodicity say somewhat stronger things than the above two statements, but I would need tools such as pseudodifferential operators to describe these more technical statements, and so I will not do so here.)
Now it turns out that for the stadium, quantum ergodicity is known to be true; this specific result was first obtained by Gérard and Leichtnam, although “classical ergodicity implies quantum ergodicity” results of this type go back to Schnirelman (see also Zelditch and Colin de Verdière). These results are established by microlocal analysis methods, which basically proceed by aggregating all the eigenfunctions together into a single object (e.g. a heat kernel, or some other function of the Laplacian) and then analysing the resulting aggregate semiclassically. It is because of this aggregation that one only gets to control almost all eigenfunctions, rather than all eigenfunctions. Here is a picture of a typical eigenfunction for the stadium (from Douglas Stone’s page):
Typical stadium eigenfunction
In analogy to the above theory, one generally expects classical unique ergodicity should correspond to QUE. For instance, there is the famous (and very difficult) quantum unique ergodicity conjecture of Rudnick and Sarnak, which asserts that QUE holds for all compact manifolds without boundary with negative sectional curvature. This conjecture will not be discussed here (it would warrant an entire post in itself, and I would not be the best placed to write it). Instead, we focus on the Bunimovich stadium. The stadium is clearly not classically uniquely ergodic due to the vertical bouncing ball modes, and so one would conjecture that it is not QUE either. In fact one conjectures the slightly stronger statement:
• Scarring conjecture: there exists a subset A \subset \Omega and a sequence u_{k_j} of eigenfunctions with \lambda_{k_j} \to\infty, such that \int_A |u_{k_j}|^2 does not converge to |A|/|\Omega|. Informally, the eigenfunctions either concentrate (or “scar”) in A, or on the complement of A.
Indeed, one expects to take A to be a union of vertical bouncing ball trajectories (from Egorov’s theorem (in microlocal analysis, not the one in real analysis), this is almost the only choice). This type of failure of QUE even in the presence of quantum ergodicity has already been observed for some simpler systems, such as the Arnold cat map. Some further discussion of this conjecture can be found here. Here are some pictures from Arnd Bäcker‘s page of some eigenfunctions (displaying just one quarter of the stadium to save space) which seem to exhibit scarring:
Scarring eigenfunctions
Of course, each of these eigenfunctions has a fixed finite energy, and so these numerics do not directly establish the scarring conjecture, which is a statement about the asymptotic limit as the energy becomes infinite.
One reason this conjecture appeals to me (apart from all the gratuitous pretty pictures one can mention while discussing it) is that there is a very plausible physical argument, due to Heller and refined by Zelditch, which indicates the conjecture is almost certainly true. Roughly speaking, it runs as follows. Using the rectangular part of the stadium, it is easy to construct (high-energy) quasimodes of order 0 which scar (concentrate on a proper subset A of \Omega) – roughly speaking, these are solutions u to an approximate eigenfunction equation -\Delta u = (\lambda + O(1)) u for some \lambda. For instance, if the two horizontal edges of the stadium lie on the lines y=0 and y=1, then one can take u(x,y) = \varphi(x) \sin(\pi n y) and \lambda = \pi^2 n^2 for some large integer n and some suitable bump function \varphi. Using the spectral theorem, one expects u to concentrate its energy in the band {}[\pi^2 n^2 - O(1), \pi^2 n^2 + O(1)]. On the other hand, in two dimensions the Weyl law for distribution of eigenvalues asserts that the eigenvalues have an average spacing comparable to 1. If (and this is the non-rigorous part) this average spacing also holds on a typical band {}[\pi^2 n^2 - O(1), \pi^2 n^2 + O(1)], this shows that the above quasimode is essentially generated by only O(1) eigenfunctions. Thus, by the pigeonhole principle (or more precisely, Pythagoras’ theorem), at least one of the eigenfunctions must exhibit scarring.
[Update, Mar 28: As Greg Kuperberg pointed out, I oversimplified the above argument. The quasimode is so weak that the eigenfunctions that comprise it could in fact spread out (as per the uncertainty principle) and fill out the whole stadium. However, if one looks in momentum space rather than physical space, the scarring of the quasimode is so strong that it must persist to one of the eigenfunctions, leading to failure of QUE even if this may not quite be detectable purely in the physical space sense described above.]
The big gap in this argument is that nobody knows how to take the Weyl law (which is proven by the microlocal analysis approach, i.e. aggregate all the eigenstates together and study the combined object) and localise it to such an extremely sparse set of narrow energy bands. Using the standard error term in Weyl’s law one can localise to bands of width O(n) around, say, \pi^2 n^2, and by using the ergodicity one can squeeze this down to o(n), but even getting control on a band of width O(n^{1-\epsilon}) would require a heroic effort (analogous to establishing a zero-free region \{ s: \hbox{Re}(s) > 1-\epsilon\} for the Riemann zeta function). The enemy is somehow that around each energy level \pi^2 n^2, a lot of exotic eigenfunctions spontaneously appear, which manage to dissipate away the bouncing ball quasimodes into a sea of quantum chaos. This is exceedingly unlikely to happen, but we do not seem to have tools available to rule it out.
One indication that the problem is not going to be entirely trivial is that one can show (basically by unique continuation or control theory arguments) that no pure eigenfunction can be solely concentrated within the rectangular portion of the stadium (where all the vertical bouncing ball modes are); a significant portion of the energy must leak out into the two “wings” (or at least into arbitrarily small neighbourhoods of these wings). This was established by Burq and Zworski.
On the other hand, the stadium is a very simple object – it is one of the simplest and most symmetric domains for which we cannot actually compute eigenfunctions or eigenvalues explicitly. It is tempting to just discard all the microlocal analysis and just try to construct eigenfunctions by brute force. But this has proven to be surprisingly difficult; indeed, despite decades of sustained study into the eigenfunctions of Laplacians (given their many applications to PDE, to number theory, to geometry, etc.) we still do not know very much about the shape and size of any specific eigenfunction for a general manifold, although we know plenty about the average-case behaviour (via microlocal analysis) and also know the worst-case behaviour (by Sobolev embedding or restriction theorem type tools). This conjecture is one of the simplest conjectures which would force us to develop a new tool for understanding eigenfunctions, which could then conceivably have a major impact on many areas of analysis.
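For what it's worth, here is what such a brute-force attempt looks like in practice – a finite-difference sketch (my own; the grid spacing and target energy are arbitrary choices) for the Dirichlet Laplacian on the stadium with horizontal edges on y = 0 and y = 1 and caps of radius 1/2. It reproduces pictures like the ones above, but of course says nothing about the asymptotic regime:

```python
# Brute-force sketch: 5-point finite differences for the Dirichlet Laplacian
# on the Bunimovich stadium, then a few eigenpairs near a chosen energy via
# shift-invert Lanczos.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

h = 0.02
xs = np.arange(-1.6, 1.6 + h / 2, h)
ys = np.arange(0.0, 1.0 + h / 2, h)
X, Y = np.meshgrid(xs, ys, indexing="ij")

inside = (np.abs(X) <= 1.0) & (Y > 0.0) & (Y < 1.0)        # rectangular part
inside |= (X - 1.0) ** 2 + (Y - 0.5) ** 2 < 0.25           # right cap
inside |= (X + 1.0) ** 2 + (Y - 0.5) ** 2 < 0.25           # left cap

n = int(inside.sum())
idx = -np.ones(X.shape, dtype=int)
idx[inside] = np.arange(n)

rows, cols, vals = [], [], []
ni, nj = X.shape
for i in range(ni):
    for j in range(nj):
        if not inside[i, j]:
            continue                    # Dirichlet: boundary values are 0
        k = idx[i, j]
        rows.append(k); cols.append(k); vals.append(4.0 / h**2)
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ii, jj = i + di, j + dj
            if 0 <= ii < ni and 0 <= jj < nj and inside[ii, jj]:
                rows.append(k); cols.append(idx[ii, jj]); vals.append(-1.0 / h**2)

A = sp.csr_matrix((vals, (rows, cols)), shape=(n, n))       # minus-Laplacian
lam, U = spla.eigsh(A, k=6, sigma=100.0, which="LM")        # eigenvalues near 100
print(np.sort(lam))   # compare with pi^2 * 3^2 ~ 88.8 for the n = 3 bouncing mode
```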
One might consider modifying the stadium in order to make scarring easier to show, for instance by selecting the dimensions of the stadium appropriately (e.g. obeying a Diophantine condition), or adding a potential or magnetic term to the equation, or perhaps even changing the metric or topology. To have even a single rigorous example of a reasonable geometric operator for which scarring occurs despite the presence of quantum ergodicity would be quite remarkable, as any such result would have to involve a method that can deal with a very rare set of special eigenfunctions in a manner quite different from the generic eigenfunction.
Actually, it is already interesting to see if one can find better quasimodes than the ones listed above which exhibit scarring, i.e. to improve the O(1) error in the spectral bandwidth. My good friend Maciej Zworski has offered a dinner in a good French restaurant for this precise problem, as well as a dinner in a very good French restaurant for the full scarring conjecture. (While I may not know as many three-star restaurants as Maciej, I can certainly offer a nice all-expenses-paid trip to sunny Los Angeles for anyone who achieves a breakthrough on any of the open problems listed here. ;-) ). |
c2d9d7a41c8b60e0 |
Chemistry LibreTexts
3: The Schrödinger Equation
The discussion in this chapter constructs the ideas that lead to the postulates of quantum mechanics, which are given at the end of the chapter. The overall picture is that quantum mechanical systems such as atoms and molecules are described by mathematical functions that are solutions of a differential equation called the Schrödinger equation. In this chapter we want to make the Schrödinger equation and other postulates of Quantum Mechanics seem plausible. We follow a train-of-thought that could resemble Schrödinger's original thinking. The discussion is not a derivation; it is a plausibility argument. In the end we accept and use the Schrödinger equation and associated concepts because they explain the properties of microscopic objects like electrons and atoms and molecules. |
7253ddaaa821692b | Open access peer-reviewed chapter
Nonrelativistic Quantum Mechanics with Fundamental Environment
By Ashot S. Gevorkyan
Submitted: May 11th 2011. Reviewed: November 14th 2011. Published: February 24th 2012.
DOI: 10.5772/35415
1. Introduction
Quantum mechanics is entering a principally new and important phase of its development in the 21st century, one which will radically change the current technical facilities in the areas of information and telecommunication technologies, precision measurement, medicine, etc. Indisputably, all this will change the production potential of human civilization and influence its morality. Despite the unquestionable successes of quantum physics in the 20th century, including the creation of lasers, the use of nuclear energy, etc., it seems that the possibilities of the quantum world are not yet deeply studied and understood, much less exploited.
The central question which arises on the way to gaining a deeper insight into the quantum nature of various phenomena is the status of the well-known, accepted criteria of applicability of quantum mechanics. The principal one is the de Broglie criterion, which characterizes any body (system) by a wavelength defined as $\lambda=\hbar/p$, where $\lambda$ is the wavelength of the body, $p$ is its momentum and $\hbar$ is the Planck constant. An important consequence of this formula is that it assigns quantum properties only to systems which have extremely small masses. Moreover, it is well known that molecular systems consisting of a few heavy atoms are, as a rule, well described by classical mechanics. In other words, the de Broglie criterion is an extremely strong limitation on the occurrence of quantum effects in macroscopic systems. Till now only a few macroscopic quantum phenomena have been known, such as superfluidity and superconductivity, which are not ordinary natural phenomena but rather extremal states of nature. Thus a reasonable question arises: how correct is the de Broglie criterion, or, more precisely, how completely does this criterion reflect the quantum properties of a system?
In order to answer this question, which is essential for the development of quantum physics, it is necessary to expand substantially the concepts upon which quantum mechanics is based. The necessity of generalizing quantum mechanics is also dictated by our aspiration to treat such hard-to-explain phenomena as spontaneous transitions between the quantum levels of a system, the Lamb shift of energy levels, the EPR paradox, etc., within a unified scheme. In this connection it seems important to finally implement the concept according to which any quantum system is basically an open system, especially when we take into account the vacuum's quantum fluctuations [1-3]. Specifically, by quantum noise coming from vacuum fluctuations we understand a stationary Wiener-type source with noise intensity proportional to the vacuum power $P\sim\hbar\langle\omega^2\rangle/4$, where $\langle\omega^2\rangle$ is the variance of the field frequencies averaged over some appropriate distribution (we assume $\langle\omega\rangle=0$, since $\omega$ and $-\omega$ must be considered as independent fluctuations). For example, in the cosmic background case, where $T=2\,\mathrm{K}$, we find, correspondingly, $P=1.15\,\mathrm{pW}$. Calculation of $\langle\omega^2\rangle$ for quantum fluctuations is not trivial, because the vacuum energy density diverges as $\omega^3$ [3] with uniform probability distribution, denying a simple averaging process unless physical cutoffs at high frequencies exist.
Thus, first of all we need a generalization of quantum mechanics which includes the nonperturbative vacuum as the fundamental environment (FE) of a quantum system (QS). As our recent theoretical works have shown [4-9], this can be achieved as a natural extension of the traditional scheme of nonrelativistic quantum mechanics, if we define quantum mechanics in terms of a nonstationary complex stochastic differential equation for the wave function (conditionally named a stochastic Schrödinger equation). Indeed, within the limits of the developed approach it is possible to resolve the above-mentioned traditional difficulties of nonrelativistic quantum mechanics and to obtain a new complementary criterion which differs from de Broglie's criterion. But the main achievement of the developed approach is that, in the case when the de Broglie wavelength vanishes and the system accordingly becomes classical under the old conception, it can nevertheless have quantum properties according to the new criterion.
Finally, these quantum properties, or more exactly quantum-field properties, can be strong enough and, correspondingly, important to study both from the point of view of quantum foundations and for practical applications.
The chapter is composed of two parts. The first part includes a general scheme for constructing the nonrelativistic quantum mechanics of a bound system with FE. In the second part of the chapter we consider the problem of a quantum harmonic oscillator with fundamental environment. Since this model is solved exactly, its investigation gives us a lot of new and extremely important information on the properties of real quantum systems, which in turn gives a deeper insight into the nature of quantum foundations.
2. Formulation of the problem
We will consider a nonrelativistic quantum system with random environment as a closed united system QS + FE, within the limits of a stochastic differential equation (SDE) of Langevin-Schrödinger (L-Sch) type:
In equation (2.1) the stochastic operator $\hat H(x,t;\{f\})$ describes the evolution of the united system QS + FE, where $\{f\}$ is a random vector of forces generating the environment fluctuations. In addition, in the units $\hbar=m=1$ the operator has the form:
where $\Delta$ denotes the Laplace operator and $V(x,t;\{f\})$ describes the interaction potential in the quantum system, which has regular and stochastic terms.
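The displayed equations (2.1)-(2.2) themselves are not reproduced above; from the definitions just given (units $\hbar=m=1$, Laplacian $\Delta$, potential $V$), they are presumably the standard Langevin-Schrödinger pair:

$$ i\,\partial_t\Psi_{\rm stc}(x,t;\{\xi\}) = \hat H(x,t;\{f\})\,\Psi_{\rm stc}(x,t;\{\xi\}), \qquad \hat H(x,t;\{f\}) = -\frac{1}{2}\Delta + V(x,t;\{f\}). $$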
We will suppose that when $\{f\}\equiv 0$ the system executes regular motion, which is described by the regular nonstationary interaction potential $V_0(x,t)=V(x,t;\{f\})\vert_{\{f\}=0}$. In this case the quantum system is described by the equation:
We also assume that in the limit $t\to-\infty$ the QS passes to an autonomous state, which mathematically reduces to the problem of eigenvalues and eigenfunctions:
where in the (in) asymptotic state $E_-$ designates the energy of the quantum system and, correspondingly, the interaction potential is defined by the limit $V_-(x)=\lim_{t\to-\infty}V_0(x,t)$. In the (out) asymptotic state, when the interaction potential tends to the limit $V_+=\lim_{t\to+\infty}V_0(x,t)$, the QS is described by the orthonormal basis $\{\Phi_+(g|x)\}$ and eigenvalues $\{E_+(g)\}$, where $g\equiv(n,m,...)$ designates an array of quantum numbers.
Further, we assume that the solution of problem (2.4) leads to a discrete spectrum of energies and wave functions which change adiabatically during the evolution (problem (2.3)). The latter implies that the wave functions form a full orthogonal basis:
where the symbol $*$ means complex conjugation.
Finally, it is important to note that an orthogonality condition similar to (2.5) can also be written for the stochastic wave function: $\int_{\mathbb{R}^3}\Psi_{\rm stc}(g|x,t;\{\xi\})\,\Psi^{*}_{\rm stc}(g'|x,t;\{\xi\})\,d^3x=1$, where $\{\xi\}$ designates the random field (see the definition below).
2.1. The equation of environment evolution
The solution of (2.1) can be represented as:
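The representation (2.6) itself is not shown; given the basis (2.5) and the coefficients $U_g(t)$ used below, it is presumably the expansion

$$ \Psi_{\rm stc}(x,t;\{\xi\}) = \sum_g U_g(t)\,\Phi(g|x,t). $$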
Now, substituting (2.6) into (2.1) and taking into account (2.3) and (2.5), we can find the following system of complex SDEs:
where the following designations are made:
Recall that in (2.7) dummy indices denote summations; in addition, it is obvious that the coefficients $A_{gg'}(t)$ and $F_{gg'}(t;\{f\})$ are, in general, complex functions.
For further investigations it is useful to represent the function $U_g(t)$ as a sum of real and imaginary parts:
Now, substituting expression (2.8) into (2.7), we can find the following system of SDEs:
where the following designations are made:
Ordering the set of random processes $\{u_g(t),v_g(t)\}$, the coefficients $\{A^{(1)}_{gg'}(t),A^{(2)}_{gg'}(t)\}$ and the random forces $\{F^{(1)}_{gg'}(t;\{f\}),F^{(2)}_{gg'}(t;\{f\})\}$, one can rewrite the system of SDEs as:
In the system of equations (2.10) the symbol $\xi$ describes a random vector process of the form $\xi\equiv\xi(...,u_{g_i},...,v_{g_j},...)$, with components indexed $1,...,n$, where $n$ is the total number of random components, which is twice the total number of quantum states. In addition, the members $a_i(\xi,t)$ in equations (2.10) are composed of the matrix elements $\{A^{(1)}_{gg'}(t),A^{(2)}_{gg'}(t)\}$ and the regular parts of the matrix elements $\{F^{(1)}_{gg'}(t;\{f\}),F^{(2)}_{gg'}(t;\{f\})\}$, while the random forces $f_j(t)$ are composed of the random parts of the above matrix elements.
Assuming that the random forces satisfy the conditions of white noise:
where $\lambda_{ij}=0$ if $i\neq j$ and $\lambda_{ii}\equiv\lambda_i\neq 0$.
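The white-noise conditions (2.11) are not reproduced; given the correlator matrix $\lambda_{ij}$ just described, they are presumably the usual ones:

$$ \langle f_i(t)\rangle = 0, \qquad \langle f_i(t)\,f_j(t')\rangle = \lambda_{ij}\,\delta(t-t'). $$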
Now, using the system of equations (2.10) and the correlation properties (2.11), it is easy to obtain the Fokker-Planck equation for the joint probability distribution of the fields $\{\xi\}$ (see in particular [6, 10]):
where the operator $\hat L^{(n)}$ is defined as:
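The displays (2.12)-(2.13) are likewise not reproduced; for the system (2.10) with the white-noise correlators (2.11), the standard Fokker-Planck form would be (a reconstruction, with $\lambda_i$ the diffusion coefficients from (2.11)):

$$ \partial_t P = \hat L^{(n)} P, \qquad \hat L^{(n)} = \sum_{i=1}^{n}\left[-\frac{\partial}{\partial\xi_i}\,a_i(\xi,t) + \frac{\lambda_i}{2}\,\frac{\partial^2}{\partial\xi_i^2}\right]. $$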
The joint probability in (2.12) is defined by the expression:
From this definition, in particular, it follows that equation (2.12) must satisfy the initial condition:
where $t_0$ is the moment of switching on the environment influence; in addition, the coordinates $\xi_i$ compose the $n$-dimensional non-Euclidean space $\xi_i\in\Xi^n$.
Finally, since the function $P(\xi,t|\xi_0,t_0)$ has the meaning of a probability distribution, we can normalize it:
where $N(t)=\int_{\Xi^n}P(\xi,t|\xi_0,t_0)\,d^n\xi$ is the factor enforcing the normalization condition to unity.
2.2. Stochastic density matrix method
We consider the following bilinear form (see representation (2.6)):
where the symbol $*$ means complex conjugation.
After integrating (2.15) over the coordinates $x\in\mathbb{R}^3$ and $\xi\in\Xi^n$, taking into account the weight function (2.13), we can find:
Now, using (2.16) we can construct an expression for a usual nonstationary density matrix [12]:
where $\Lambda_g(t)=|U_g(t)|^2/I(t)$ has the meaning of the population of the level of the quantum state under the conditions of equilibrium between the quantum system and the fundamental environment. It is easy to check that the stochastic density matrix $\rho_{\rm stc}(x,t;\{\xi\}\,|\,x',t;\{\xi\})$ satisfies the von Neumann equation, while the reduced density matrix $\rho(x,t\,|\,x',t)$ does not satisfy it. Taking into account equations (2.1), (2.13) and (2.15), we can obtain the evolution equation for the reduced density matrix:
where ${\rm Tr}_\xi\{...\}$ denotes the averaging over the random fields; in addition, $[\,,\,]$ describes the quantum Poisson bracket, i.e. the commutator $[A,B]=AB-BA$.
It is obvious that equation (2.18) is a nonlocal equation. Taking into account (2.12), one can bring equation (2.18) to the form:
where the following designations are made: $\rho(x,x',t)=\rho(x,t\,|\,x',t')\vert_{t'=t}$ is the reduced density matrix; in addition, $\rho_{\rm stc}(x,x',t;\{\xi\})=\rho_{\rm stc}(x,t;\{\xi\}\,|\,x',t';\{\xi\})\vert_{t'=t}$.
Thus, equation (2.19) differs from the usual von Neumann equation for the density matrix. The new equation (2.19), unlike the von Neumann equation, also accounts for the exchange between the quantum system and the fundamental environment, which in this case plays the role of a thermostat.
2.3. Entropy of the quantum subsystem
For a quantum ensemble, entropy was defined for the first time by von Neumann [11]. In the considered case, where instead of a quantum ensemble we have one united system QS + FE, the entropy of the quantum subsystem is defined in a similar way:
In connection with this, there arises an important question about the behavior of the entropy of a multilevel quantum subsystem on a large scale of times. It is obvious that the relaxation process can be nontrivial (for example, absence of a stationary regime in the limit $t\to+\infty$) and, hence, its investigation will be a difficult problem both for analytic methods and for numerical simulation.
A very interesting case is when the QS breaks up into several subsystems. In particular, when the QS breaks up into two fragments and when these fragments are spaced far from each other, we can write for a reduced density matrix of the subsystem the following expression:
Recall that the vectors $y$ and $z$ describe the first and second fragments, correspondingly.
Now, substituting the reduced density matrix $\rho(x,x',t)$ into the expression (2.20) for the entropy of the QS, we obtain:
where the following designations are made in expression (2.22):
Since at the beginning of the evolution the two subsystems interact with each other, it is easy to show that $J_1(\lambda;t)\neq 1$ and $J_2(\lambda;t)\neq 1$; moreover, they can fluctuate in time. The last circumstance proves that the subsystems of the QS are in an entangled state. This means that between the two subsystems there arises a new type of nonpotential interaction which does not depend on the distance between, or the size of, the subsystems. In the case when subsystems 1 and 2 have not interacted, $J_1=J_2=1$ and, correspondingly, $S_1$ and $S_2$ are constants denoting the entropies of the isolated systems.
2.4. Conclusion
The developed approach allows one to construct a more realistic nonrelativistic quantum theory which includes the fundamental environment as an integral part of the quantum system. As a result, the problems of spontaneous transitions (including decay of the ground state) between the energy levels of the QS, the Lamb shift of the energy levels, the EPR paradox and many other difficulties of the standard quantum theory are solved naturally. Equations (2.12)-(2.13') describe the quantum peculiarities of FE which arise under the influence of the quantum system. Unlike the de Broglie wavelength, they do not disappear with an increase in the mass of the quantum subsystem. In other words, a macroscopic system is obviously described by the classical laws of motion; however, space-time structures can be formed in FE under its influence. Also, it is obvious that these quantum-field structures ought to be interpreted as a natural continuation of, and addition to, the considered quantum (classical) subsystem. Under definite conditions these quantum-field structures can be quite observable and measurable. Moreover, it is proved that after disintegration of a macrosystem into parts its fragments are found in an entangled state, which is specified by the nonpotential interaction (2.22), and all this takes place due to the fundamental environment. This especially concerns nonstationary systems, for example biological systems, in which elementary atom-molecular processes proceed continuously [13]. Note that such a conclusion becomes even more obvious if one takes into account the well-known work [14], where the idea of a universal description of the unified dynamics of micro- and macroscopic systems in the form of the Fokker-Planck equation was suggested for the first time.
Finally, it is important to add that in the limits of the developed approach the closed system QS + FE in equilibrium is described in the extended space $\mathbb{R}^3\otimes\Xi^n$, where $\Xi^n$ can be interpreted as a compactified subspace in which FE in the equilibrium state is described.
3. The quantum one-dimensional harmonic oscillator (QHO) with FE as a problem of evolution of an autonomous system on the stochastic space-time continuum
As has been pointed out in the first part of the chapter, there are many problems of great importance in the field of non-relativistic quantum mechanics, such as the description of the Lamb shift, spontaneous transitions in atoms, the quantum Zeno effect [15], etc., which remain unsolved because the concept of the physical vacuum has not been incorporated within the framework of standard quantum mechanics. There are various approaches for investigating the above-mentioned problems: the quantum state diffusion method [16], the Lindblad density matrix method [17, 18], the quantum Langevin equation [19], the stochastic Schrödinger equation method (see [12]), etc. Recall that the representation [17, 18] describes a priori the most general situation which may appear in a non-relativistic system. One of these approaches is based on treating the wave function as a random process, for which a stochastic differential equation (SDE) is derived. However, the consideration of a reduced density matrix on a semi-group [20] is quite an ambiguous procedure and, moreover, its technical realization is possible, as a rule, only by using perturbation methods. For investigation of the inseparably linked closed system QS + FE, a new mathematical scheme has been proposed [5-8] which allows one to construct all important parameters of the quantum system and environment in closed form. The main idea of the developed approach is the following. We suppose that the evolution time of the combined system consists of an infinite set of time intervals with different durations, where at the end of each interval a random force generated by the environment influences the quantum subsystem, while the motion of the quantum subsystem within each interval is described by the Schrödinger equation. Correspondingly, the equation which describes the combined closed system QS + FE on a large scale of time can be represented by a stochastic differential equation of Langevin-Schrödinger (L-Sch) type.
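A minimal numerical sketch of this picture (all parameters invented for the illustration, not taken from the chapter): a two-level system evolved exactly under the Schrödinger equation within each short interval, with a fresh Gaussian kick to the Hamiltonian at each step. The noise induces transitions that a closed, noise-free Schrödinger evolution would forbid.

```python
import numpy as np

# Piecewise-Schrodinger evolution: within each interval dt the state evolves
# unitarily under H = H0 + f_k * V, where f_k is a fresh Gaussian kick
# representing the environment (white-noise limit).
rng = np.random.default_rng(1)
H0 = np.diag([0.0, 1.0])                    # bare levels (illustrative)
V = np.array([[0.0, 1.0], [1.0, 0.0]])      # coupling the noise acts through
dt, lam, steps = 0.01, 0.05, 20000

psi = np.array([1.0, 0.0], dtype=complex)   # start in the ground state
for _ in range(steps):
    f = rng.normal(0.0, np.sqrt(2 * lam / dt))   # white-noise amplitude
    w, U = np.linalg.eigh(H0 + f * V)
    psi = U @ (np.exp(-1j * w * dt) * (U.conj().T @ psi))  # exact exp(-iH dt)

print(abs(psi[1]) ** 2)   # noise-induced upper-level population
```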
In this section, within the framework of the 1D L-Sch equation, an exact approach to the quantum harmonic oscillator (QHO) model with fundamental environment is constructed. In particular, the method of the stochastic density matrix (SDM) is developed, which permits one to construct all thermodynamic potentials of the quantum subsystem analytically, in the form of multiple integrals of the solution of a 2D second-order partial differential equation.
3.1. Description of the problem
We will consider that the 1D QHO+FE closed system is described within the framework of the L-Sch type SDE (see equation (2.1)), where the evolution operator has the following form:
In expression (3.1) the frequency $\Omega(t;\{f\})$ is a random function of time whose stochastic component describes the influence of the environment. For the analysis of a model of the environment, a set of harmonic oscillators [21-25] or a quantized field [26, 27] is often used. For simplicity, we will assume that the frequency has the following form:
where $f(t)$ is an independent Gaussian stochastic process with zero mean and a δ-shaped correlation function:
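The displays (3.2)-(3.3) are missing here. Consistent with the drift $(u_1^2-u_2^2+\Omega_0^2)$ and the diffusion coefficient $\lambda$ obtained in Section 3.3 below, they presumably read (the additive form of the fluctuation is an assumption):

$$ \Omega^2(t;\{f\}) = \Omega_0^2 + f(t), \qquad \langle f(t)\rangle = 0, \quad \langle f(t)\,f(t')\rangle = 2\lambda\,\delta(t-t'). $$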
The constant $\lambda$ characterizes the power of the stochastic force $f(t)$. Equation (2.1) with operator (3.1) has an asymptotic solution $\Psi(n|x,t)$ in the limit $t\to-\infty$:
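The asymptotic solution (3.4) should be the stationary oscillator wave function with frequency $\Omega_0$, standard in the units $\hbar=m=1$ (a textbook reconstruction, not the author's display):

$$ \Psi(n|x,t) = e^{-i(n+1/2)\Omega_0 t}\,\phi(n|x), \qquad \phi(n|x) = \left(\frac{\Omega_0}{\pi}\right)^{1/4}\frac{1}{\sqrt{2^n n!}}\;e^{-\Omega_0 x^2/2}\,H_n\!\left(\sqrt{\Omega_0}\,x\right). $$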
where $n=0,1,...$; in addition, $\phi(n|x)$ is the wave function of the stationary oscillator and $H_n(y)$ is the Hermite polynomial. The formal solution of problem (2.1), (3.1)-(3.4) may be written down explicitly for arbitrary $\Omega(t;\{f\})$ (see [28]). It has the following form:
where the function $\chi(y,\tau)$ is the wave function of the Schrödinger equation:
for a harmonic oscillator on the stochastic space-time continuum $\{y,\tau\}$. In (3.6) the following designations are made:
The random solution $\xi(t)$ satisfies the classical homogeneous equation of an oscillator, which describes the stochastic fluctuating process flowing in FE:
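Equation (3.7), described here as the classical homogeneous oscillator equation for $\xi(t)$, is presumably:

$$ \ddot\xi(t) + \Omega^2(t;\{f\})\,\xi(t) = 0. $$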
Taking into account (3.5) and the well-known solution of the autonomous quantum harmonic oscillator (3.6) (see [28]), for the stochastic complex processes which describe the 1D QHO+FE closed system we can write the following expression:
The solution of (3.8) is defined in the extended space $\Xi=\mathbb{R}^1\otimes R_{\{\xi\}}$, where $\mathbb{R}^1$ is the one-dimensional Euclidean space and $R_{\{\xi\}}$ is the functional space which will be defined below (see Section 3.3). Note that the wave function (3.8) (more precisely, a wave functional) describes the quantum subsystem taking into account the influence of the environment. It is easy to show that the complex probabilistic processes (3.8) constitute a full orthogonal basis in the space of quadratically integrable functions $L^2$.
Taking into account the orthogonal properties of (3.8), we can write the following normalization condition:
where the symbol $*$ means complex conjugation.
So, the initial L-Sch equation (2.1), (3.1), which satisfies the asymptotic condition (3.4), is reduced to the autonomous Schrödinger equation (3.6) on the stochastic space-time by means of the etalon differential equation (3.7). Note that equation (3.7), taking into account conditions (3.2) and (3.3), describes the motion of FE.
3.2. The mean values of measurable parameters of 1D QHO
For the investigation of irreversible processes in quantum systems, the non-stationary density matrix representation based on the quantum Liouville equation is often used. However, the application of this representation has restrictions [11]: it is used for cases when the system was in a state of thermodynamic equilibrium before switching on the interaction, and its evolution after switching on is adiabatic. Below, within the framework of the considered model, a new approach is used for the investigation of the statistical properties of an irreversible quantum system, without any restriction on the magnitude or rate of change of the interaction. Taking into account definition (2.15), we can develop the SDM method, in the framework of which it is possible to calculate various measurable physical parameters of the quantum subsystem.
Definition 1. The expression for a stochastic function:
will be referred to as the stochastic density matrix. Recall that the partial SDM is defined by the expression $\rho^{(m)}_{\rm stc}(x,t;\{\xi\}\,|\,x',t;\{\xi\})=\Psi_{\rm stc}(m|x,t;\{\xi\})\,\Psi^{*}_{\rm stc}(m|x',t;\{\xi\})$. In addition, $w(m)$ describes the population of the level with energy $E_m=(m+1/2)\Omega_0$ until the moment of time $t_0$ when the random excitations of FE are switched on. Integrating (3.10) over the Euclidean space $\mathbb{R}^1$, taking into account (3.9), we obtain the normalization condition for the weight functions:
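Neither display (the definition (3.10) nor the weight normalization) is reproduced; from the sentences above they should read:

$$ \rho_{\rm stc} = \sum_m w(m)\,\rho^{(m)}_{\rm stc}, \qquad \sum_m w(m) = 1. $$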
Below we define the mean values of various operators. Note that when averaging over the extended space $\Xi$ the order of integration is important. If the integral of the stochastic density matrix is taken first over the space $\mathbb{R}^1$ and then over the functional space $R_{\{\xi\}}$, the result equals unity. This means that in the extended space $\Xi$ all conservation laws are valid; in other words, the stochastic density matrix is unitary in this space. If we integrate in the inverse order, we get another picture. After integration over $R_{\{\xi\}}$, the obtained density matrix describes quantum processes in the Euclidean space $\mathbb{R}^1$. Its trace is, in general, not unitary, which means that the conservation laws, generally speaking, can be invalid in the Euclidean space.
Definition 2. The expected value of the operator $\hat A(x,t|\{\xi\})$ in the quantum state $m$ is defined by the expression:
The mean value of the operator $\hat A(x,t|\{\xi\})$ over all quantum states, respectively, will be:
Note that the operation ${\rm Tr}_\xi$ in (3.12) and (3.13) denotes functional integration:
where $D\mu(\xi)$ designates the measure of the functional space, which will be defined below.
If we wish to derive an expression describing the irreversible behavior of the system, it is necessary to change the definition of entropy. Let us recall that the von Neumann non-stationary entropy (the measure of randomness of a statistical ensemble) is defined as follows:
where $\rho(x,x';t)={\rm Tr}_\xi\{\rho_{\rm stc}\}$ is the reduced density matrix and $\gamma=\Omega_0/\lambda^{1/3}$ is the interaction parameter between the quantum subsystem and the environment.
Let us note that the definition of the von Neumann entropy (3.15) is correct for the quantum information theory and agrees well with the Shannon entropy in the classical limit.
Definition 3. For the considered system of 1D QHO with FE the entropy is naturally defined by the form:
where the designation $\rho_{\rm stc}\equiv\rho_{\rm stc}(x,x',t;\{\xi\})$ is used.
Finally, it is important to note that the sequence of integrations, first in the functional space $R_{\{\xi\}}$ and then in the Euclidean space $\mathbb{R}^1$, corresponds to a non-unitary reduction of the state vector (i.e. a non-unitary influence on the quantum subsystem).
3.3. Derivation of an equation for conditional probability of fields. Measure of functional space R{ξ}
Let us consider the stochastic equation (3.7). We will present the solution of the equation in the following form:
After substitution of (3.17) into (3.7) we can define the following nonlinear SDE:
The second equation in (3.18) expresses the condition of continuity of the function $\xi(t)$ and its first derivative at the moment of time $t=t_0$. Using the fact that the function $\eta(t)$ describes a complex-valued random process, the SDE (3.18) may be presented in the form of two SDEs for real-valued fields (random processes). Namely, introducing the real and imaginary parts of $\eta(t)$:
the following system of SDEs can finally be obtained for the fields $\eta(t)\equiv\eta(u_1,u_2)$:
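The displays (3.17)-(3.20) are missing. Writing $\xi(t)=\xi(t_0)\exp\int_{t_0}^{t}\eta(t')\,dt'$ and substituting into the oscillator equation (3.7) gives the Riccati equation $\dot\eta+\eta^2+\Omega^2(t;\{f\})=0$; with $\eta=u_1+iu_2$ and the additive noise form assumed above, its real and imaginary parts are presumably:

$$ \dot u_1 = -\left(u_1^2-u_2^2+\Omega_0^2\right) - f(t), \qquad \dot u_2 = -2\,u_1 u_2, $$

which matches the drift and diffusion quoted after (3.24)-(3.26) below.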
The pair of fields $(u_1,u_2)$ in this model is not independent, because their evolution is influenced by the common random force $f(t)$. This means that the joint probability distribution of the fields can be represented in the form:
which is a non-factorable function. After differentiating the functional (3.21) with respect to time, using the SDEs (3.18) and the correlation properties (3.3) of the random force, as well as making standard calculations and reasonings (see [29, 30]), we obtain the following Fokker-Planck equation for the distribution of the fields:
with the initial condition:
Thus, equations (3.22)-(3.23) describe the free evolution of FE.
Our purpose now consists in constructing the measure of the functional space, which is a necessary condition for further theoretical constructions. The solution of equations (3.22)-(3.23) for small time intervals can be presented in the form:
So, we can state that the evolution of the fields $(u_1,u_2)$ in the functional space $R_{\{\xi\}}$ is characterized by a regular drift with velocity $(u_1^2-u_2^2+\Omega_0^2)$ against the background of Gaussian fluctuations with diffusion coefficient $\lambda$. The infinitesimal displacement of the trajectory $\eta(t)$ in the space $R_{\{\xi\}}$ is determined by the expression [30]:
As follows from expression (3.26), the trajectory is continuous everywhere, and, correspondingly, the condition $\eta(t+\Delta t)\vert_{\Delta t\to 0}=\eta(t)$ is valid. However, expression (3.26) is nowhere differentiable, owing to the presence of a term of order $\Delta t^{1/2}$. If we divide the time into small intervals, each equal to $\Delta t=t/N$ with $N\to\infty$, then expression (3.25) can be interpreted as the probability of transition from $\eta_k\equiv\eta(t_k)$ to $\eta_{k+1}\equiv\eta(t_{k+1})$ during the time $\Delta t$ in a process of Brownian motion. With consideration of the above, we can construct the probability of the fields' change on finite intervals of time, i.e. the measure of the space $R_{\{\xi\}}$ (see [4]):
where $D\mu(\eta_0)=\delta(u_1-u_{01})\,\delta(u_2-u_{02})\,du_1\,du_2$ (see condition (3.25)).
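A short simulation sketch of this construction, under the SDEs reconstructed above: Euler-Maruyama sampling of the fields $(u_1,u_2)$, whose ensemble histogram approximates the distribution $P$ on which the measure is built. All parameter values are illustrative.

```python
import numpy as np

# Euler-Maruyama sampling of (u1, u2): Riccati drift plus additive white
# noise of intensity 2*lam acting on u1 (see the reconstructed SDEs above).
rng = np.random.default_rng(0)
Om0, lam, dt = 1.0, 0.1, 1e-3
steps, ntraj = 5000, 4000

u1 = np.zeros(ntraj)             # free oscillator before switch-on:
u2 = np.full(ntraj, Om0)         # eta(t0) = i*Om0, i.e. u1 = 0, u2 = Om0
for _ in range(steps):
    dW = rng.normal(0.0, np.sqrt(dt), ntraj)
    u1, u2 = (u1 - (u1**2 - u2**2 + Om0**2) * dt - np.sqrt(2 * lam) * dW,
              u2 - 2 * u1 * u2 * dt)

# Ensemble histogram ~ the distribution P(u1, u2, t) behind the measure.
H, e1, e2 = np.histogram2d(u1, u2, bins=60, density=True)
print(u1.mean(), u2.mean())
```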
3.4. Entropy of the ground state of 1D QHO with fundamental environment
For simplicity we will suppose that $w(0)=1$ and, correspondingly, $w(m)=0$ for all quantum numbers $m\geq 1$ (see expression (3.10)). In this case the SDM (3.10), with consideration of expressions (3.8), (3.14) and (3.16), may be represented in the following form:
where the designation $\rho^{(0)}_{\rm stc}(x,x',t|\{\xi\})\equiv\rho^{(0)}_{\rm stc}(x,t,\{\xi\}\,|\,x',t',\{\xi\})\vert_{t'=t}$ is used.
Now we can calculate the reduced density matrix $\rho_0(x,x',t)={\rm Tr}_\xi\{\rho^{(0)}_{\rm stc}(x,x',t|\{\xi\})\}$. Using the expressions for the continuous measure (3.27) and the stochastic density matrix (3.28), we can construct the corresponding functional integral, which can then be calculated by the generalized Feynman-Kac formula (see Appendix 4.1, [6]):
In expression (3.29) the function $Q_0(u_1,u_2,t)$ is a solution of the equation:
which satisfies the following initial and boundary conditions:
Let us consider the expression for the entropy (3.17). Substituting (3.29) into (3.17) we can find:
After conducting the integration over the space $\mathbb{R}^1$ in (3.33), it is easy to find the expression:
where the following designations are made:
Similarly, as in the case of (3.29), using expressions (3.34) it is possible to calculate the functional trace in the expression $N_\alpha(t)$:
where the function $Q_\alpha(u_1,u_2,t)$ is the solution of the equation:
Recall that the boundary conditions for (3.36) are similar to (3.31). Besides, if we set $\alpha=0$ in (3.35), we obtain the normalization function $N_0(t)$. After calculating the function $Q_\alpha(u_1,u_2,t)$ we can also calculate the function $D_\alpha(u_1,u_2,t)\equiv\partial_\alpha Q_\alpha(u_1,u_2,t)$. In particular, it is easy to obtain an equation for $D_\alpha(u_1,u_2,t)$ by differentiating equation (3.36) with respect to $\alpha$:
which is solved with initial and boundary conditions of type (3.31).
Introducing the designation $D_0(u_1,u_2,t)=D_\alpha(u_1,u_2,t)\vert_{\alpha=0}$, it is possible to find the expression:
Using (3.38) we can write the final form of the entropy of the «ground state» in the limit of thermodynamic equilibrium:
It is simple to show that in the limit $\gamma\to\infty$ the entropy tends to zero.
Thus, at the reduction $\rho_{\rm stc}(x,x',t|\{\xi\})\to\rho(x,x',t)$ information in the quantum subsystem is lost, as a result of which the entropy changes too. Let us recall that usually the entropy of a quantum subsystem grows when an environment is included; however, in the considered case the behavior of the entropy as a function of the interaction parameter $\gamma$ can be quite nontrivial.
3.5. Energy spectrum of a quantum subsystem
The energy spectrum is an important characteristic of a quantum system. In the considered case we will calculate the first two levels of the energy spectrum in the limit of thermodynamic equilibrium. Taking into account expressions (3.12) and (3.28) for the energy of the «ground state», the following expression can be written:
where the operator:
describes the Hamiltonian of 1D QHO without an environment.
Substituting (3.41) into (3.40) and conducting simple calculations, we find:
where the following designations are made:
In expression (3.43) the stationary solution $Q_0(\bar u_1,\bar u_2,\gamma)=\lim_{t\to+\infty}Q_0(\bar u_1,\bar u_2,t)$ is a scaling solution of equation (3.30) or (3.36) for the case $\alpha=0$. Similarly, it is possible to calculate the average energy of any excited state. In particular, the calculation of the energy level of the first excited state leads to the following expression:
in addition:
In expression (3.45) the stationary solution $Q_1(\bar u_1,\bar u_2,\gamma)=\lim_{t\to+\infty}Q_1(\bar u_1,\bar u_2,t)$ is a scaling solution of equation (3.36) for the case $\alpha=1$.
Figure 1. The first two energy levels of the quantum harmonic oscillator without FE (quantum numbers $\bar n = 0, 1, ...$) and, correspondingly, with consideration of relaxation into the FE (quantum numbers $n = 0, 1, ...$).
As follows from expressions (3.42)-(3.46), the relaxation effects lead to a violation of the equidistance of the energy levels of the quantum harmonic oscillator (Fig. 1). In other words, relaxation of the quantum subsystem in the fundamental environment leads to a shift of the energy levels, analogous to the well-known Lamb shift.
3.6. Spontaneous transitions between the energy levels of a quantum subsystem
The question of stability of the energy levels of a quantum subsystem is very important. It is obvious that the answer to this question may be received after investigation of the problem of spontaneous transitions between the energy levels. Taking into account (3.4) and (3.8), we can write an expression for the probability of spontaneous transition between two different quantum states:
where the wave function $\Psi(m|x,t)$ describes a pure state.
It is obvious that in the considered formulation of the problem transitions might occur between any energy levels, including transitions from the «ground state» to any excited state. Using expression (3.47), we can calculate the spontaneous decay of every quantum state. In particular, if $w(0)=1$ and $w(m)\equiv 0$ for any $m\geq 1$, the probability of transition from the «ground state» to all other excited states may be calculated as follows:
In (3.48) $\Sigma_0$ characterizes the population of the «ground state» in the limit of equilibrium thermodynamics. The first two nonzero probabilities of spontaneous transitions are calculated simply (see Appendix 4.2):
Let us note that in expressions (3.48) and (3.49) the functions $\sigma_0(\bar u_1,\bar u_2,\gamma)$ and $\sigma_2(\bar u_1,\bar u_2,\gamma)$ are solutions of the equation:
Comparing expressions (3.48) and (3.49), and taking into account the fact that equation (3.50) has different solutions for different numbers $n$, i.e. $\sigma_n\neq\sigma_m$ if $n\neq m$, we can conclude that the detailed balance of transitions between different quantum levels is violated, i.e. $\Delta_{0\to 2}\neq\Delta_{2\to 0}$. Also, it is obvious that transitions between quantum levels are possible only if their parities are identical.
3.7. Uncertainty relations, Weyl transformation and Wigner function for the ground state
According to the Heisenberg uncertainty relations, the product of the coordinate and corresponding momentum of the quantum system cannot have arbitrarily small dispersions. This principle has been verified experimentally many times. However, at the present time for development of quantum technologies it is very important to find possibilities for overcoming this fundamental restriction.
As is well known, the dispersion of the operator $\hat A_i$ is determined as follows:
In the considered case the dispersion of the operator at an arbitrary time $t$ in the «ground state» can be calculated by the following expression:
Using expression (3.52), we can calculate the dispersions of the coordinate operator $\hat x$ and the momentum operator $\hat p$, correspondingly:
The dispersions of the operators at the moment of time $t_0$, when the interaction with the environment is not yet switched on, are described by the standard Heisenberg relation: $\Delta\hat x(t)\,\Delta\hat p(t)\vert_{t=t_0}=1/2$. The uncertainty relation for large intervals of time, when the united system approaches thermodynamic equilibrium, can be represented in the form:
where the average values of the operators $\hat x(\gamma)$ and $\hat p(\gamma)$ can be found from (3.53) and (3.54) in the limit $t\to+\infty$.
It is obvious that the expressions for the operator dispersions (3.53)-(3.54) differ from the Heisenberg uncertainty relations, and this difference can become essential at certain values of the interaction parameter $\gamma$. The last circumstance is very important, since it allows controlling the fundamental uncertainty relations with the help of the parameter $\gamma$.
Definition 4. We will refer to the expression:
as the stochastic Wigner function and, correspondingly, to $W_{\rm stc}(m|p,x,t;\{\xi\})$ as the partial stochastic Wigner function. In particular, for the partial stochastic Wigner function the following expression may be found:
Using the stochastic Wigner function, it is possible to calculate the mean values of the physical quantity which corresponds to the operator $\hat A$:
where the stochastic function $a(p,x,t;\{\xi\})$ is defined with the help of the Weyl transformation of the operator $\hat A$:
Now we can construct a Wigner function for the «ground state»:
As one can see, function (3.61) describes the distribution of the coordinate $x$ and momentum $p$ in the phase space. The Wigner stationary distribution function can be found in the limit of stationary processes:
$W^{(0)}(x,p,\gamma)=\lim_{t\to+\infty}W^{(0)}(x,p,t)$. It is important to note that, similarly to the regular case, after integration of the stochastic function $W_{\rm stc}(m|p,x,t;\{\xi\})$ over the phase space it is easy to get the normalization condition:
Recall that for the Wigner function (3.61) the normalization condition of this type is, in the general case, not satisfied.
3.8. Conclusion
Since everything is immersed in the physical vacuum, any quantum system is in fact an open system [1-3]. A crucially new approach to constructing the quantum mechanics of a closed non-relativistic system QS + FE has been developed recently by the authors of [5-8], based on the principle of local equivalence with the Schrödinger representation. More precisely, it has been assumed that the evolution of a quantum system is such that it may be described by the Schrödinger equation on any small time interval, while the motion as a whole is described by an SDE for the wave function. However, in this case there arises a non-trivial problem of finding the measure of the functional space, which is necessary for calculating the average values of various parameters of the physical system.
We have explored the possibility of building the non-relativistic quantum mechanics of a closed system QS + FE within the framework of a one-dimensional QHO with random frequency. Mathematically, the problem is formulated in terms of an SDE for a complex-valued probability process (3.1) defined in the extended space $\mathbb{R}^1\otimes R_{\{\xi\}}$. The initial SDE for complex processes is reduced to the 1D Schrödinger equation for an autonomous oscillator on a random space-time continuum (3.6). For this purpose the complex SDE of Langevin type has been used. In the case when the random fluctuations of FE are described by the white-noise correlation function model, the Fokker-Planck equation for the conditional probability of the fields is obtained (3.22)-(3.23) using two real-valued SDEs for the fields (3.20). With the help of solutions of this equation, a measure of the functional space $R_{\{\xi\}}$ is constructed (3.27) on infinitely small time intervals (3.24). In the context of the developed approach the representation of the stochastic density matrix is introduced, which provides an exact computation scheme for the physical parameters of the QHO (the quantum subsystem) and also of the fundamental environment after relaxation under the influence of the QS. Analytic formulas for the energies of the «ground state» and of the first excited state, with consideration of their shift (like the Lamb shift), are obtained. The spontaneous transitions between various energy levels were calculated analytically, and the violation of symmetry between elementary transitions up and down, including spontaneous decay of the «ground state», was proved. Important results of the work are the expressions for the uncertainty relations and the Wigner function of a quantum subsystem strongly interacting with the environment.
Finally, it is important to note that the developed approach is more realistic because it takes into account the shifts of energy levels, spontaneous transitions between energy levels and many other effects which are inherent in real quantum systems. Further development of the considered formalism, applied to exactly solvable many-dimensional models, can essentially extend our understanding of the quantum world and lead us to new nontrivial discoveries.
4. Appendix
4.1. Appendix 1
Theorem. Let us consider a set of random processes $\xi\equiv\{\xi_1,\xi_2,...,\xi_n\}$ satisfying the set of SDEs:
so that the Fokker-Planck equation for the conditional transition probability density:
is given by the equation:
The $\xi_i$ are assumed to be Markovian processes satisfying the condition $\xi(t_0)\equiv\xi_0$. At the same time, function (4.1.2) gives their exhaustive description:
where $P^{(n)}$ is the density of the probability that the trajectory $\xi(t)$ passes through the sequence of intervals $[\xi_1,\xi_1+d\xi_1],...,[\xi_n,\xi_n+d\xi_n]$ at the subsequent moments of time $t_1\leq t_2\leq ...\leq t_n$, respectively.
Under these assumptions we can obtain the following representation for an averaging procedure:
where $d\xi = d\xi_1 ... d\xi_n$ and the function $Q(\xi,\xi',t)$ is a solution of the following parabolic equation:
which satisfies the following initial and boundary conditions:
where $\Vert ... \Vert$ is a norm in $\mathbb{R}^n$.
Proof. The proof is performed formally, under the assumption that all the manipulations are legal. We expand in a Taylor series the quantity under the averaging on the left-hand side of (4.1.5):
The designations $V_1(\tau)\equiv V_1(\xi(\tau),\xi(t))$ and $V_2(t)\equiv V_2(\xi(t))$ are introduced in (4.1.9) for brevity. Using the Fubini theorem, we can represent the averaging procedure in (4.1.9) as integration with the weight $P^{(n)}$ from (4.1.4):
Changing, where it is necessary, the order of integration, we can obtain the following representation for the n-th moment:
where the countable set of functions $Q_m(\xi,\xi',t)$ is determined from the recurrence relations:
i.e. the function $Q_0$ is, in fact, independent of $\xi'$. Upon substitution of (4.1.10) into (4.1.8) we insert the summation procedure under the integration sign and then, changing the order of the double summation, get the expression:
The representation (4.1.5) is thus obtained.
It remains to prove that the function $Q$ from (4.1.13) is a solution of the problem (4.1.6)-(4.1.7). Using (4.1.14) and (4.1.11) we can easily show that $Q$ satisfies the integral equation:
Taking into account the fact that $Q_0$ satisfies equation (4.1.3) with the initial and boundary conditions (4.1.7), and also that it is an integrable function, it is easy to deduce from equation (4.1.15) that the function $Q$ coincides with the solution of the problem (4.1.6)-(4.1.7). Thus, the theorem is proved.
4.2. Appendix 2
Let us consider the bilinear form:
which can be represented, taking into account expressions (3.4) and (3.8), in the following form:
After conducting functional integration of this bilinear form by the generalized Feynman-Kac formula (see Appendix 4.1), it is possible to find:
where $X_n(u_1,u_2,t)$ is a solution of the complex equation:
It is useful to represent the solution of equation (4.2.4) in the following form:
By substituting (4.2.5) into equation (4.2.4), it is possible to find the following two real-valued equations for the real and imaginary parts of the solution:
The system of equations is symmetric with regard to the replacements $\sigma_n\to\chi_n$ and $\chi_n\to\sigma_n$. In other words, for the solution $\sigma_n(u_1,u_2,t)$ it is possible to write the following equation:
Accordingly, for the complex solution $X_n(u_1,u_2,t)$ we can write the expression:
Now it is possible to pass to the calculation of the amplitudes of transition between different quantum states. For simplicity we will compute the first two probabilities of transitions: $\Delta_{0\to 2}$ and $\Delta_{2\to 0}$. Integrating over $x$, taking into account result (4.2.8), it is easy to find:
where $\sigma_0(\bar u_1,\bar u_2,\gamma)$ is the scaled solution of equation (4.2.7) in the limit $t\to+\infty$; in addition:
In a similar way it is possible to calculate the transition matrix element $S_{2\to 0}(\gamma)$:
As follows from expressions (4.2.9), (4.2.10) and (4.2.11), in the general case $S_{0\to 2}(\gamma)\neq S_{2\to 0}(\gamma)$.
This Chapter was prepared and written with the kind help of Ms. Elena Pershina. |
12a3be1dc72c68ff | Friday, July 29, 2016
Secret of Laser vs Secret of Piano
There is a connection between the action of a piano as presented in the sequence of posts The Secret of the Piano and a laser (Light Amplification by Stimulated Emission of Radiation), which is remarkable as an expression of a fundamental resonance phenomenon.
To see the connection we start with the following quote from Principles of Lasers by Orazio Svelto:
• There is a fundamental difference between spontaneous and stimulated emission processes.
• In the case of spontaneous emission, the atoms emit e.m waves that has no definite phase relation with that emitted by another atom...
• In the case of stimulated emission, since the process is forced by the incident e.m. wave, the emission of any atom adds in phase to that of the incoming wave...
A laser thus emits coherent light as electromagnetic waves all in-phase, and thereby can transmit intense energy over distance.
The question is how the emission/radiation can be coordinated so that the e.m. waves from many/all atoms are kept in-phase. Without coordination the emission will become more or less out-of-phase resulting in weak radiation.
The Secret of the Piano reveals that the emission from the three strings for each note in the middle register, which may have a frequency spread of about half a Hertz, is kept in phase by interaction with a common soundboard through a common bridge in a "breathing mode", with the soundboard/bridge vibrating with half a period of phase lag with respect to the strings. The breathing mode is initiated when the hammer feeds energy into the strings by a hard hit.
In the breathing mode strings and soundboard act together to generate an outgoing sound from the soundboard fed by energy from the strings, which has a long sustain/duration in time, as the miracle of the piano.
If we translate the experience from the piano to the laser, we understand that laser emission/radiation is (probably) kept in phase by interaction with a stabilising half a period out-of-phase forcing corresponding to the soundboard, while leaving part of the emission to strong in-phase action on a target.
An alternative to quick hammer initiation is in-phase forcing over time, which requires a switch from input to output by half a period shift of the forcing.
We are also led to the idea that black body radiation, which is partially coherent, is kept in phase by interaction with a receiver/soundboard. Without a receiver/soundboard there will be no radiation. It is thus meaningless to speak about black body radiation into some vacuous nothingness, which is often done based on a fiction of "photon" particles being spat out from a body even without a receiver; that is as physically meaningless as speaking into the desert.
Thursday, July 28, 2016
New Quantum Mechanics 10: Ionisation Energy
Below are sample computations of ground states for Li1+, C1+, Ne1+ and Na1+, showing good agreement with table data of first ionisation energies of 0.2, 0.4, 0.8 and 0.2 (in Hartree atomic units), respectively.
Note that computation of first ionisation energy is delicate, since it represents a small fraction of total energy.
Wednesday, July 27, 2016
New Quantum Mechanics 9: Alkaline (Earth) Metals
The result presentation continues below with alkaline and alkaline earth metals Na (2-8-1), Mg (2-8-2), K (2-8-8-1), Ca (2-8-8-2), Rb (2-8-18-8-1), Sr (2-8-18-8-2), Cs (2-8-18-18-8-1) and Ba (2-8-18-18-8-2):
New Quantum Mechanics 8: Noble Gases Atoms 18, 36, 54 and 86
The presentation of computational results continues below with the noble gases Ar (2-8-8), Kr (2-8-18-8), Xe (2-8-18-18-8) and Rn (2-8-18-32-18-8) with the shell structure indicated.
Again we see good agreement of ground state energy with NIST data, and we notice nearly equal energy in fully filled shells.
Note that the NIST ionization data does not reveal true shell energies since it displays a fixed shell energy distribution independent of ionization level, and thus cannot be used for comparison of shell energies.
New Quantum Mechanics 7: Atoms 1-10
Monday, July 25, 2016
New Quantum Mechanics 6: H2 Molecule
• kernel distance = 1.44
Sunday, July 24, 2016
New Quantum Mechanics 5: Model as Schrödinger + Neumann
This sequence of posts presents an alternative Schrödinger equation for an atom with $N$ electrons starting from a wave function Ansatz of the form
• $\psi (x,t) = \sum_{j=1}^N\psi_j(x,t)$ (1)
as a sum of $N$ electronic complex-valued wave functions $\psi_j(x,t)$, depending on a common 3d space coordinate $x$ and a time coordinate $t$, with non-overlapping spatial supports $\Omega_j(t)$ filling 3d space, satisfying for $j=1,...,N$ and all time:
• $i\dot\psi_j + H\psi_j = 0$ in $\Omega_j$, (2a)
• $\frac{\partial\psi_j}{\partial n} = 0$ on $\Gamma_j(t)$, (2b)
where $\Gamma_j(t)$ is the boundary of $\Omega_j(t)$, $\dot\psi =\frac{\partial\psi}{\partial t}$ and $H=H(x,t)$ is the (normalised) Hamiltonian given by
• $H = -\frac{1}{2}\Delta - \frac{N}{\vert x\vert} + \sum_{k\neq j}V_k(x)$,
with $V_k(x)$ the repulsion potential corresponding to electron $k$ defined by
• $V_k(x) = \int_{\Omega_k(t)}\frac{\psi_k^2(y,t)}{\vert x - y\vert}\, dy$,
and the electron wave functions are normalised to unit charge of each electron:
• $\int_{\Omega_j(t)}\psi_j^2(x,t) dx=1$ for $j=1,..,N$ and all time. (2c)
The differential equation (2a) with homogeneous Neumann boundary condition (2b) is complemented by the following global free boundary condition:
• $\psi (x,t)$ is continuous across inter-electron boundaries $\Gamma_j(t)$. (2d)
The ground state is determined as the real-valued time-independent minimiser $\psi (x)=\sum_j\psi_j(x)$ of the total energy
• $E(\psi ) = \frac{1}{2}\int\vert\nabla\psi\vert^2\, dx - \int\frac{N\psi^2(x)}{\vert x\vert}dx+\sum_{k\neq j}\int V_k(x)\psi^2(x)\, dx$,
under the normalisation (2c), the homogeneous Neumann boundary condition (2b) and the free boundary condition (2d).
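To make this concrete, here is a minimal sketch of how such a minimiser can be computed by the parabolic relaxation used in these posts (see "New Quantum Mechanics 4" below), reduced to the simplest possible setting: a single electron in spherical symmetry, where the free-boundary and Neumann machinery is absent. This is an illustration under those assumptions, not the author's code; it recovers the hydrogen ground state $E=-0.5$.

```python
import numpy as np

# Parabolic relaxation to the hydrogen ground state in radial form:
# with u(r) = r*psi(r), minimise E = 1/2 int u'^2 dr - int u^2/r dr
# under int u^2 dr = 1, by explicit gradient flow plus renormalisation.
N, R = 600, 15.0
h = R / N
r = np.linspace(h, R, N)
u = r * np.exp(-0.7 * r)               # rough initial guess
u /= np.sqrt(h * np.sum(u**2))

def laplacian(u):
    up = np.concatenate(([0.0], u, [0.0]))   # Dirichlet ends: u(0) = u(R) = 0
    return (up[2:] - 2.0 * up[1:-1] + up[:-2]) / h**2

dtau = 0.4 * h**2                      # explicit-scheme stability limit
for _ in range(80000):
    u = u + dtau * (0.5 * laplacian(u) + u / r)   # gradient flow step
    u /= np.sqrt(h * np.sum(u**2))                # project to unit charge

E = h * np.sum(-0.5 * u * laplacian(u) - u**2 / r)
print(E)   # about -0.5 Hartree, the hydrogen ground state
```

For several electrons one runs one such flow per container $\Omega_j$ and moves the free boundary so that the charge density meets the zero-gradient condition described below.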
In the next post I will present computational results in the form of energy of ground states for atoms with up to 54 electrons and corresponding time-periodic solutions in spherical symmetry, together with ground state and dissociation energy for H2 and CO2 molecules in rotational symmetry.
In summary, the model is formed as a system of one-electron Schrödinger equations, or electron container model, on a partition of 3d space depending of a common spatial variable and time, supplemented by a homogeneous Neumann condition for each electron on the boundary of its domain of support combined with a free boundary condition asking continuity of charge density across inter-element boundaries.
We shall see that for atoms with spherically symmetric electron partitions in the form of a sequence of shells centered at the kernel, the homogeneous Neumann condition corresponds to vanishing kinetic energy of each electron normal to the boundary of its support as a condition of separation or interface condition between different electrons meeting with continuous charge density.
Here is one example: Argon with 2-8-8 shell structure, with the NIST Atomic data base ground state energy in the first line (526.22) and the computed value in the second line, and the total energies in the different shells in three groups, with kinetic energy in the second row, kernel potential energy in the third and repulsive electron energy in the last row. Note that the total energy in the fully filled first (2 electrons) and second shell (8 electrons) are nearly the same, while the partially filled third shell (also 8 electrons, out of 18 when fully filled) has lower energy. The color plot shows charge density per unit volume and the black curve charge density per unit radial increment as functions of radius. The green curve is the kernel potential and the cyan curve the total electron potential. Note in particular the vanishing derivative of charge density/kinetic energy at the shell interfaces.
lördag 2 juli 2016
New Quantum Mechanics 4: Free Boundary Condition
This is a continuation of previous posts presenting an atom model in the form of a free boundary problem for a joint continuously differentiable electron charge density, as a sum of individual electron charge densities with disjoint supports, satisfying a classical Schrödinger wave equation in 3 space dimensions.
The ground state of minimal total energy is computed by parabolic relaxation with the free boundary separating different electrons determined by a condition of zero gradient of charge density. Computations in spherical symmetry show close correspondence with observation, as illustrated by the case of Oxygen with 2 electrons in an inner shell (blue) and 6 electrons in an outer shell (red) as illustrated below in a radial plot of charge density showing in particular the zero gradient of charge density at the boundary separating the shells at minimum total energy (with -74.81 observed and -74.91 computed energy). The green curve shows truncated kernel potential, the magenta the electron potential and the black curve charge density per radial increment.
The new aspect is the free boundary condition as zero gradient of charge density/kinetic energy. |
280517f7dfe7b5ed | Saturday, June 10, 2017
Turok's bogus criticism of Hartle-Hawking, Vilenkin calculable big bangs
In his blog post You can't smooth the big bang, Tetragraviton mentions a string group meeting at the Perimeter Institute where an anti-string pundit – who also happens to be the current director of the Perimeter Institute – led the debate about "why the Hartle-Hawking and Vilenkin pictures of the big bang are equivalent and wrong".
The discussion was revolving around their 5-weeks-old preprint
No smooth beginning for spacetime.
When Feldbrugge, Lehners, and Turok released that paper, I saw the title and it looked fine and unsurprising (some quantities grow big near the Big Bang and the initial singularity in the Lorentzian causal diagram is basically unavoidable). Well, I surely wasn't aware of the fact that they claim to find a general problem with the Hartle-Hawking or Vilenkin approach to the wave function of the Universe, i.e. the initial conditions.
OK, so Mr Director wasn't satisfied with giving nonsensical negative monologues about the inflationary cosmology and string theory. He has added the Hartle-Hawking paradigm, too. And Tetragraviton seems to be an obedient, 100% corrupt employee of Mr Neil Turok's so he presented his rant totally uncritically.
OK, Vilenkin proposed that the early Universe – when its radius or curvature radius was very small – could have been created from nothing via the "tunneling from nothing". Alternatively, Hartle and Hawking proposed the paradigm that an early Universe whose slice looks like a 3-sphere may be continuously continued through the Euclidean spacetime to a point and it is smooth around that point. By continuing some most natural smooth conditions of the path integral around that initial point, one may calculate the preferred, Hartle-Hawking wave function on any sphere, including the finite ones.
It sounds plausible to me that when these general paradigms are done properly in a complete theory of quantum gravity, they are equivalent. But I don't think that Turok and pals have presented evidence that both of these pictures, and especially Hartle-Hawking, are dysfunctional.
In this business, people have encountered puzzles concerning the signs of the growing or decreasing terms in the exponential defining the path integral. And the continuation from the Minkowski to the Euclidean space often requires one to choose a contour in the complex \(N\)-plane (where \(N\) is the lapse function, a time interval) and there's no known universal rule to do it right.
Let me point out that the Turok et al. paper has one followup at this point,
The Real No-Boundary Wave Function in Lorentzian Quantum Cosmology,
by Hartle and four co-authors. They focus on the criticisms by Turok et al. and repeat that the Hartle-Hawking story is just fine. Why do they arrive at different conclusions?
Well, they use different contours. Turok et al. use a half-infinite contour, Hartle et al. use an infinite contour going along the whole real axis. As a consequence, Hartle et al. may extract a wave function that actually solves the Wheeler-DeWitt equation, while Turok et al. don't end up with a solution to this "simplified Schrödinger equation in quantum gravity". Instead, the reduced contour of Turok et al. produces a Green's function for that equation.
That's too bad – what the alternative proposal by Turok et al. gives you doesn't solve "what replaces the Schrödinger equation in quantum gravity" but something that violates the equation. So it is not really a good candidate and should be abandoned. The Turok et al. calculation unsurprisingly leads them to focus on different saddle points than those of Hartle et al. – in fact, the Turok et al. saddle points make it impossible to get cosmological predictions.
You may see the general misunderstanding of the "logic of the derivation" on the side of Turok et al. The logic of the path integral is that Hartle and Hawking found a clever way to find a new cosmologically relevant solution to the Wheeler-DeWitt equation. Any clever trick using any clever contour or continuation of the signs is OK as long as the result really solves the desired equation.
In the most schematic form, the Wheeler-DeWitt equation is simply\[
H \Psi = 0.
\] It is like the Schrödinger equation except that the term \(i\hbar \partial \Psi / \partial t\) is missing. It has to be missing because in general relativity-based gravity, you don't have any universally well-defined coordinate \(t\). So you cannot define the derivative, either. Instead, the time \(t\) within quantum gravity has to be extracted as a value of an observable, e.g. from the density of matter or the position of hands on a clock, and when you do so, the time derivative term becomes just one part of the Hamiltonian term.
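A toy version of this contour story can be checked in a few lines. Replace the minisuperspace path integral by the Airy-type integral \(\int e^{i(N^3/3+xN)}dN\) and the Wheeler-DeWitt equation by \(\psi''=x\psi\): the full-line contour gives \(2\pi\,{\rm Ai}(x)\), a genuine solution, while the half-line contour \([0,\infty)\) gives \(\pi({\rm Ai}(x)+i\,{\rm Gi}(x))\), and the Scorer function Gi obeys the inhomogeneous equation \({\rm Gi}''-x\,{\rm Gi}=-1/\pi\) – a Green's-function-like object, not a solution. (This is only an analogy for the structure of the argument, not the actual Feldbrugge-Lehners-Turok computation.)

```python
import mpmath as mp

x = mp.mpf("1.3")   # arbitrary test point

def residual(f):
    # residual of the homogeneous "WDW" analogue: psi'' - x*psi
    return mp.diff(f, x, 2) - x * f(x)

full_line = lambda t: 2 * mp.pi * mp.airyai(t)                      # contour (-oo, oo)
half_line = lambda t: mp.pi * (mp.airyai(t) + 1j * mp.scorergi(t))  # contour [0, oo)

print(residual(full_line))   # ~ 0   : a genuine solution
print(residual(half_line))   # ~ -1j : a constant source term survives
```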
OK, so Hartle and friends have a solution to the equation that seems to be a verifiable solution and has some other desired characteristics. Turok only have a wrong candidate for such a solution, derived from a badly chosen contour etc. But the fact that the Turok "solution" is wrong doesn't mean that all other solutions are wrong.
At this place, I can't resist to mention that Turok's criticism seems analogous to many creationists' criticisms of Darwin's evolution. These critics sometimes create their own "plausible" model how species could have evolved, and they find out that it was too slow or otherwise unsatisfactory. However, they seem to ignore the fact that their detailed scenario isn't necessarily correct and Nature could have taken – and may actually be argued to have taken – a different path that simply works. For example, the mutation rate could have temporarily increased because the animals that participated in this speedup had some advantages. Creationists are just closed-minded about the existence of all such "simply clever" tricks. Turok et al. are analogous to the creationists. Their first guess doesn't work well – so they conclude that the whole paradigm, discovered by someone else, is wrong. But it doesn't follow. In particular, everything that works and is valuable was invented by someone else, while everything that sucks was proposed by Turok. One must remember that these two groups of ideas are disjoint, not identical.
The Hartle-Hawking paradigm has only been semi-successfully applied to some truncated, semiclassical, minisuperspace approximations of quantum gravity. At the end, I believe that someone will figure out how to do analogous things in string/M-theory properly, and she may figure out the deepest questions about the initial state of the Universe and maybe even the choice of the right vacuum or vacua from the landscape.
By the way, if I had read the abstract of the paper by Turok et al. five weeks ago, I would probably get provoked by the statement
We argue that the Lorentzian path integral for quantum cosmology is meaningful and, with...
Quite generally, the Lorentzian path integral is well-defined but it's well-defined only when we properly define it, and to do so, we generally have to use a Euclidean continuation. In other words, the Lorentzian path integral may be well-defined at the end but the Euclidean one is more "immediately" well-defined. The number of operations and correct assumptions you need in the Euclidean path integral is smaller. If you wish: the Wick rotation is almost universally a good idea. There are lots of examples in which the Euclideanized structures in the path integral allow you to quantify the terms more reliably. One example are the genus \(g\) Riemann surfaces representing the world sheets' history in string theory – we assume that they are Euclidean and the work with the Lorentzian surfaces would create lots of new problems and puzzles.
The sentence quoted above sounds like they are saying that the "Lorentzian path integral is more well-defined than the Euclidean one" which is just wrong. This general sentence is a preparation for the fact that they would be making wrong contour and sign choices that would lead to wrong results – not the correct ones that are most naturally obtained by a continuation to the Euclidean signature.
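To spell out the standard reason why the Euclidean object is the more "immediately" well-defined one (a textbook-level sketch, not specific to either paper): under the Wick rotation $t=-i\tau$ the oscillatory weight of the Lorentzian path integral turns into an exponentially damped one,\[
\int\mathcal{D}\phi\; e^{iS_L[\phi]/\hbar}\ \longrightarrow\ \int\mathcal{D}\phi\; e^{-S_E[\phi]/\hbar},\qquad S_E=-i\,S_L\big|_{t=-i\tau},
\]so convergence rests on $S_E$ being bounded from below rather than on delicate phase cancellations – which is exactly why the contour and sign choices in the continuation matter so much.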
Fine. So I believe that Turok et al. are just wrong and I am worried by the suggestion that he is abusing his power. I am worried that the likes of Tetragraviton are licking the director's rectum because it might be a good idea for them personally. More generally, it's bad for an institute of this singular character to have a director who isn't quite a top physicist but who tries to fight against top physicists – and against the most important paradigms in physics. It looks like a classic example of the abuse of power. The directors should either be top physicists themselves, or someone else who has a lot of respect for top physicists. Someone's efforts to increase his influence within science by mostly political means are wrong, wrong, wrong.
Why do conductors bond in such a way that leaves the valence band not full?
May 28, 2014 #1
I am trying to understand the differences between metals, semiconductors and insulators regarding their conductivity properties. I am new to this area, so please correct me if I'm wrong.
I may be simplifying things now:
1) If I put a voltage over a solid, I only measure a current if there are empty energy states for the electrons to occupy with their available energy (thermal or whatever).
2) The reason why metals conduct electricity so well is that there are empty (higher) energy states. And the reason for that is that when these atoms bind into a solid, they bond in such a way that the electron configuration leaves some available states in the sub-shells, e.g. the s- or p-sub-shells are not full.
3) Semiconductors and insulators bond in such a way that all states are full. However, the difference between semiconductors and insulators is the band gap, and I am not really sure what determines the magnitude of the band gap. Maybe something with a cosine function, k values and the Schrödinger equation?
If I am right in (2), why do metals not form covalent bonds in such a way that all states are full, like in insulators (or what prevents them from doing so)?
Thank you very much for your time!
May 29, 2014 #2
You're making a valiant effort to fuse a number of separate phenomena (electronic structure, Fermi energy/level, and bonding, together with electrical conductivity) into a unified coherent model. While this is a correct approach that may be solvable with advanced computational techniques that are beyond me, I believe the introductory models split these up into separate phenomena (see the textbooks by Ashcroft and Mermin, and Kittel):
1) electronic structure / band theory
2) electrical conductivity
There are many ways to get from 1 -> 2 (e.g. the Drude model using 1) for the collision time, semi-classical transport, Boltzmann diffusion, and phonon scattering).
I'll try to directly address some of your questions:
if a band is empty, there are no electrons to carry current when a voltage is applied.
if a band is full, the electrons are stuck because they have nowhere to move to (see Mott-type insulators for a surprising application of this, which will confuse you more).
yes, the difference between insulators and semiconductors is the size of the band gap. I believe the threshold is around ~2 eV, but don't quote me. Yes, it is arbitrary. Ab initio calculation of the band gap is one of the unsolved problems in solid state, at least according to Wikipedia last I checked.
Your (2) is correct.
Your final question: the filling depends on the number of electrons available. Atomic transition metals have roughly half-filled 3d states, so their condensed counterparts have the possibility to be half filled.
Atomic Si and Ge can completely fill their spd shells, so their condensed counterparts can be completely filled.
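Not from the thread, but a minimal numerical sketch may make the "cosine function, k values" remark above concrete (all numbers are illustrative): a 1D tight-binding chain with two alternating hoppings t1, t2 has the two bands E(k) = +/-|t1 + t2*exp(ik)|, so equal hoppings give a gapless metallic cosine band while t1 != t2 opens a gap 2|t1 - t2| at the zone boundary – the filling of these bands then decides metal vs insulator, as discussed above.

import numpy as np

# Dimerized 1D tight-binding chain (lattice constant a = 1):
# band energies E_pm(k) = +/- |t1 + t2*exp(i*k)|.
def bands(t1, t2, nk=1001):
    k = np.linspace(-np.pi, np.pi, nk)
    e = np.abs(t1 + t2 * np.exp(1j * k))
    return k, -e, e   # lower band, upper band

for t1, t2 in [(1.0, 1.0), (1.0, 0.6)]:
    k, lower, upper = bands(t1, t2)
    gap = upper.min() - lower.max()   # direct gap at the zone boundary
    print(f"t1={t1}, t2={t2}: gap = {gap:.3f} (expected {2*abs(t1-t2):.3f})")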
May 30, 2014 #3
Metals differ from non-metals in their electronegativity being low (the boundary being somewhere near 2.5).
That means that in non-metals, bonds are covalent, i.e. there is very little ionic character. On the other hand, in metals ionic structures are of large importance, e.g. something like ##\rm Na^+ Na^-\leftrightarrow Na^- Na^+## in addition to covalent contributions. As Coulombic interactions are not directed, a metal can have bonding interactions with many more neighbours than a non-metal.
Jun 15, 2014 #4
The reason metals conduct electricity so well is the accessibility of their higher energy states. The band gap is defined as the energy required to liberate a bound valence electron into a delocalized, unbound "fluid". On average, metals are easier to oxidize than non-metals; in other words, they are more willing to let one of their electrons become unbound. As electrons are promoted into the conduction band, they access this sort of pseudo-oxidation state. The sea is made up of charge carriers, so the oxidation state is still considered 0; however, these conduction electrons are no longer bound.
To answer your other question, band gap can be determined with UV/Vis spectroscopy. This would be a good page if you want to read more: http://science.unitn.it/~semicon/members/pavesi/CaseStudy_uv81.pdf
I've heard that a good line to draw is at 3 eV, although it is arbitrary, like you said. I guess it depends on the material's behavior during its usage. I would assume that temperature dependence plays into it.
Monday, March 31, 2014
Planck's Constant = Human Convention Standard Frequency vs Electronvolt
The recent posts on the photoelectric effect exhibit Planck's constant $h$ as a conversion standard between the unit of light frequency $\nu$ in $Hz\, = 1/s$ as periods per second and the electronvolt ($eV$), expressed in Einstein's law of photoelectricity:
• $h\times (\nu -\nu_0) = eU$,
where $\nu_0$ is the smallest frequency producing a photoelectric current, $e$ is the charge of an electron and $U$ the stopping potential in Volts $V$ for which the current is brought to zero for $\nu > \nu_0$. Referring to Lenard's 1902 experiment with $\nu -\nu_0 = 1.03\times 10^{15}\, Hz$, corresponding to the ultraviolet limit of the solar spectrum, and $U = 4.3\, V$, Einstein obtained
• $h = 4.17\times 10^{-15} eVs$
to be compared with the reference value $4.135667516(91)\times 10^{-15}\, eVs$ used in Planck's radiation law. We see that here $h$ occurs as a conversion standard between Hertz $Hz$ and electronvolt $eV$ with
• $1\, Hz = 4.17\times 10^{-15}\, eV$
To connect to quantum mechanics, we recall that Schrödinger's equation is normalized with $h$ so that the first ionization energy of Hydrogen at frequency $\nu = 3.3\times 10^{15}\, Hz$ equals $13.6\, eV$, to be compared with $3.3\times 4.17 = 13.76\, eV$ corresponding to Lenard's photoelectric experiment.
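As a check on the arithmetic, a few lines of Python (my own illustration; the numbers are exactly those quoted above) reproduce both the Lenard-based value of $h$ and the Hydrogen ionization energy:

# Einstein's law h*(nu - nu0) = e*U with h expressed in eV*s,
# so that the electron charge e drops out.
nu_minus_nu0 = 1.03e15            # Hz, Lenard's 1902 experiment (as quoted)
U = 4.3                           # V, stopping potential (as quoted)
h_eVs = U / nu_minus_nu0
print(h_eVs)                      # ~4.17e-15 eV*s

# The same conversion standard applied to Hydrogen ionization:
nu_H = 3.3e15                     # Hz
print(nu_H * h_eVs)               # ~13.8 eV, vs the reference 13.6 eV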
We understand that Planck's constant $h$ can be seen as a conversion standard between light energy measured by frequency and electron energy measured in electronvolts. The value of $h$ can then be determined by photoelectricity and thereafter calibrated into Schrödinger's equation to fit with ionization energies as well as into Planck's law as a parameter in the high-frequency cut-off (without a very precise value). The universal character of $h$ as a smallest unit of action is then revealed to simply be a human convention standard without physical meaning. What a disappointment!
• Planck's constant was introduced as a fundamental scale in the early history of quantum mechanics. We find a modern approach where Planck's constant is absent: it is unobservable except as a constant of human convention.
Finally: It is natural to view frequency $\nu$ as a measure of energy per wavelength, since radiance as energy per unit of time scales with $\nu\times\nu$ in accordance with Planck's law, which can be viewed as $\nu$ wavelengths each of energy $\nu$ passing a specific location per unit of time. We thus expect to find a linear relation between frequency and electronvolt as two energy scales: If 1 € (Euro) is equal to 9 Skr (Swedish Crowns), then 10 € is equal to 90 Skr.
Sunday, March 30, 2014
Photoelectricity: Millikan vs Einstein
The American physicist Robert Millikan received the Nobel Prize in 1923 for (i) the experimental determination of the charge $e$ of an electron and (ii) the experimental verification of Einstein's law of photoelectricity, which was awarded the 1921 Prize.
Millikan started out his experiments on photoelectricity with the objective of disproving Einstein's law and in particular the underlying idea of light quanta. To his disappointment Millikan found that according to his experiments Einstein's law in fact was valid, but he resisted by questioning the conception of light-quanta even in his Nobel lecture:
• In view of all these methods and experiments the general validity of Einstein’s equation is, I think, now universally conceded, and to that extent the reality of Einstein’s light-quanta may be considered as experimentally established.
• But the conception of localized light-quanta out of which Einstein got his equation must still be regarded as far from being established.
• Whether the mechanism of interaction between ether waves and electrons has its seat in the unknown conditions and laws existing within the atom, or is to be looked for primarily in the essentially corpuscular Thomson-Planck-Einstein conception as to the nature of radiant energy is the all-absorbing uncertainty upon the frontiers of modern Physics.
Millikan's experiments consisted in subjecting a metallic surface to light of different frequencies $\nu$ and measuring the resulting photoelectric current, determining a smallest frequency $\nu_0$ producing a current and the (negative) stopping potential required to bring the current to zero for frequencies $\nu >\nu_0$. Millikan thus measured $\nu_0$ and $V$ for different frequencies $\nu > \nu_0$ and found a linear relationship between $\nu -\nu_0$ and $V$, which he expressed as
• $\frac{h}{e}(\nu -\nu_0)= V$,
in terms of the charge $e$ of an electron, which he had already determined experimentally, and the constant $h$, which he determined to have the value $6.57\times 10^{-34}\, Js$. The observed linear relation between $\nu -\nu_0$ and $V$ could then be expressed as
• $h\nu = h\nu_0 +eV$
which Millikan had to admit was nothing but Einstein's law with $h$ representing Planck's constant.
But Millikan could argue that, after all, the only thing he had done was to establish a macroscopic linear relationship between $\nu -\nu_0$ and $V$, which in itself did not give undeniable evidence of the existence of microscopic light-quanta. What Millikan did was to measure the current for different potentials of the plus pole receiving the emitted electrons under different exposures to light, and thereby discover a linear relationship between frequency $\nu -\nu_0$ and stopping potential $V$ independent of the intensity of the light and the properties of the metallic surface.
By focussing on frequency and stopping potential Millikan could make his experiment independent of the intensity of incoming light and of the metallic surface, and thus capture a conversion between light energy and electron energy of general significance.
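The slope extraction is easily illustrated with a small least-squares sketch (the data points below are made up, not Millikan's, generated with $h/e = 4.14\times 10^{-15}\, Vs$ and threshold $\nu_0 = 10^{15}\, Hz$):

import numpy as np

h_over_e = 4.14e-15                               # V*s, used only to fabricate data
nu = np.array([1.2, 1.4, 1.6, 1.8, 2.0]) * 1e15   # Hz
rng = np.random.default_rng(0)
V = h_over_e * (nu - 1.0e15) + rng.normal(0.0, 0.01, nu.size)   # volts

# Least-squares line V = slope*nu + intercept: the slope estimates h/e
# and the threshold frequency is recovered as nu0 = -intercept/slope.
slope, intercept = np.polyfit(nu / 1e15, V, 1)    # fit in units of 1e15 Hz
print(slope * 1e-15, -intercept / slope * 1e15)   # ~4.14e-15 V*s, ~1e15 Hz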
But why then should the stopping potential $V$ scale with the frequency $\nu - \nu_0$, or $eV$ with the energy $h(\nu - \nu_0)$? Based on the analysis on Computational Blackbody Radiation, the answer would be that $h\nu$ represents a threshold energy for emission of radiation in Planck's radiation law and $eV$ represents a threshold energy for emission of electrons, neither of which would demand light quanta.
Saturday, March 29, 2014
Einstein: Genius by Definition of Law of Photoelectricity
• $h\nu = h\nu_0 + eV$
• It is the theory which decides what we can observe.
Thursday, March 27, 2014
How to Make Schrödinger's Equation Physically Meaningful + Computable
The derivation of Schrödinger's equation as the basic mathematical model of quantum mechanics is shrouded in mystery: the idea is to start from a classical Hamiltonian $H(q,p)$ as the total energy, equal to the sum of kinetic and potential energy:
• $H(q,p)=\frac{p^2}{2m} + V(q)$,
where $q(t)$ is the position and $p=m\dot q= m\frac{dq}{dt}$ the momentum of a moving particle of mass $m$, and to make the formal ad hoc substitution, with $\bar h =\frac{h}{2\pi}$ and $h$ Planck's constant:
• $p = -i\bar h\nabla$ with formally $\frac{p^2}{2m} = - \frac{\bar h^2}{2m}\nabla^2 = - \frac{\bar h^2} {2m}\Delta$,
to get Schrödinger's equation in time dependent form
• $ i\bar h\frac{\partial\psi}{\partial t}=H\psi $,
with now $H$ a differential operator acting on a wave function $\psi (x,t)$ with $x$ a space coordinate and $t$ time, given by
• $H\psi \equiv -\frac{\bar h^2}{2m}\Delta \psi + V\psi$,
where now $V(x)$ acts as a given potential function. As a time independent eigenvalue problem Schrödinger's equation then takes the form:
• $-\frac{\bar h^2}{2m}\Delta \psi + V\psi = E\psi$,
with $E$ an eigenvalue, as a stationary value for the total energy
• $K(\psi ) + W(\psi )\equiv\frac{\bar h^2}{2m}\int\vert\nabla\psi\vert^2\, dx +\int V\psi^2\, dx$,
as the sum of kinetic energy $K(\psi )$ and potential energy $W(\psi )$, under the normalization $\int\psi^2\, dx = 1$. The ground state then corresponds to minimal total energy.
We see that the total energy $K(\psi ) + W(\psi)$ can be seen as a smoothed version of $H(q,p)$ with
• $V(q)$ replaced by $\int V\psi^2\, dx$,
• $\frac{p^2}{2m}=\frac{m\dot q^2}{2}$ replaced by $\frac{\bar h^2}{2m}\int\vert\nabla\psi\vert^2\, dx$,
and Schrödinger's equation as expressing stationarity of the total energy, as an analog of the classical equations of motion expressing stationarity of the Hamiltonian $H(p,q)$ under variations of the path $q(t)$.
We conclude that Schrödinger's equation for a one electron system can be seen as a smoothed version of the equation of motion for a classical particle acted upon by a potential force, with Planck's constant serving as a smoothing parameter.
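As a numerical illustration of the eigenvalue problem above, here is a minimal sketch in scaled units $\bar h = m = 1$ with the harmonic potential $V(x)=x^2/2$ as a stand-in (my own example, not part of the original derivation); the lowest eigenvalues of the discretized operator should approach $E_n = n + 1/2$:

import numpy as np

# -(1/2) psi'' + V psi = E psi discretized by central differences on [-L, L].
L, n = 10.0, 1000
x = np.linspace(-L, L, n)
dx = x[1] - x[0]
V = 0.5 * x**2                          # harmonic potential

# Tridiagonal Hamiltonian: kinetic part -(1/2) d^2/dx^2 plus diagonal V.
H = (np.diag(1.0 / dx**2 + V)
     + np.diag(-0.5 / dx**2 * np.ones(n - 1), 1)
     + np.diag(-0.5 / dx**2 * np.ones(n - 1), -1))

print(np.linalg.eigvalsh(H)[:4])        # ~ [0.5, 1.5, 2.5, 3.5]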
Similarly it is natural to consider smoothed versions of classical many-particle systems as quantum mechanical models resembling Hartree variants of Schrödinger's equation for many-electrons systems, that is quantum mechanics as smoothed particle mechanics, thereby (maybe) reducing some of the mystery of Schrödinger's equation and opening to computable quantum mechanical models.
We see Schrödinger's equation arising from a Hamiltonian as total energy, kinetic energy + potential energy, rather than from a Lagrangian as kinetic energy - potential energy. The reason is a confusing terminology, with $K(\psi )$ named kinetic energy even though it does not involve time differentiation, while it would more naturally occur in a Lagrangian as a form of potential energy, like elastic energy in classical mechanics.
Wednesday, March 26, 2014
New Paradigm of Computational Quantum Mechanics vs ESS
ESS, the European Spallation Source, is a €3 billion projected research facility captured by clever Swedish politicians and allocated to the plains outside the old university town of Lund in Southern Sweden, with start in 2025: Neutrons are excellent for probing materials on the molecular level – everything from motors and medicine to plastics and proteins. ESS will provide around 30 times brighter neutron beams than existing facilities today. The difference between the current neutron sources and ESS is something like the difference between taking a picture in the glow of a candle, or doing it under flash lighting.
Quantum mechanics was invented in the 1920s under the limits of pen and paper computation, but allowing limitless theory thriving in Hilbert spaces populated by multidimensional wave functions described by fancy symbols on paper. Lofty theory and sparse computation were compensated by inflating the observer role of the physicist into a view that only physics observed by a physicist was real physics, with extra support from a conviction that the life or death of Schrödinger's cat depended more on the observer than on the cat, and that supercolliders are very expensive. The net result was (i) uncomputable limitless theory combined with (ii) unobservable practice as the essence of the Copenhagen Interpretation filling textbooks.
Today the computer opens a change from impossibility to possibility, but this requires a fundamental change of the mathematical models from uncomputable to computable non-linear systems of 3d Hartree-Schrödinger equations (HSE) or Density Functional Theory (DFT). This brings theory and computation together into a new paradigm of Computational Quantum Mechanics (CQM), shortly summarized as follows:
1. Experimental inspection of microscopic physics difficult/impossible.
2. HSE-DFT for many-particle systems are solvable computationally.
3. HSE-DFT simulation allows detailed inspection of microscopics.
4. Assessment of HSE simulations can be made by comparing macroscopic outputs with observation.
The linear multidimensional Schrödinger equation has no meaning in CQM and a new foundation is asking to be developed. The role of observation in the Copenhagen Interpretation is taken over by computation in CQM: Only computable physics is real physics, at least if physics is a form of analog computation, which may well be the case. The big difference is that anything computed can be inspected and observed, which opens to non-destructive testing with only limits set by computational power.
The Large Hadron Collider (LHC) and the projected neutron source European Spallation Source (ESS) in Lund in Sweden represent the old paradigm of smashing to pieces the fragile structure under investigation, and as such may well be doomed.
Tuesday, March 25, 2014
Fluid Turbulence vs Quantum Electrodynamics
Horace Lamb (1849 - 1934), author of the classic text Hydrodynamics: "It is asserted that the velocity of a body not acted on by any force will be constant in magnitude and direction, whereas the only means of ascertaining whether a body is, or is not, free from the action of force is by observing whether its velocity is constant."
There is a famous quote by the British applied mathematician Horace Lamb summarizing the state of classical fluid mechanics and the new quantum mechanics in 1932 as follows:
• I am an old man now, and when I die and go to heaven there are two matters on which I hope for enlightenment. One is quantum electrodynamics, and the other is the turbulent motion of fluids. And about the former I am rather optimistic.
Concerning the turbulent motion of fluids I am happy to report that this matter is now largely resolved by computation, as made clear in the article New Theory of Flight, soon to be delivered for publication in the Journal of Mathematical Fluid Mechanics, with lots of supplementary material on The Secret of Flight. This gives good hope that the other problem, of quantum electrodynamics, can likewise be unlocked by viewing The World as Computation:
• In a time of turbulence and change, it is more true than ever that knowledge is power. (JFK)
Quantum Physics as Digital Continuum Physics
Quantum mechanics was born in 1900 in Planck's theoretical derivation of a modification of Rayleigh-Jeans law of blackbody radiation based on statistics of discrete "quanta of energy" of size $h\nu$, where $\nu$ is frequency and $h =6.626\times 10^{-34}\, Js$ is Planck's constant.
This was the result of a long fruitless struggle to explain the observed spectrum of radiating bodies using deterministic electromagnetic wave theory, which ended in Planck's complete surrender to statistics as the only way he could see to avoid the "ultraviolet catastrophe" of infinite radiation energies, in a return to the safe haven of his dissertation work in 1889-90 based on Boltzmann's statistical theory of heat.
Planck described the critical step in his analysis of a radiating blackbody as a discrete collection of resonators as follows:
• We must now give the distribution of the energy over the separate resonators of each frequency, first of all the distribution of the energy $E$ over the $N$ resonators of frequency $\nu$. If $E$ is considered to be a continuously divisible quantity, this distribution is possible in infinitely many ways.
• We consider, however – this is the most essential point of the whole calculation – $E$ to be composed of a well-defined number of equal parts and use thereto the constant of nature $h = 6.55\times 10^{-27}\, erg\, sec$. This constant multiplied by the common frequency $\nu$ of the resonators gives us the energy element $\epsilon$ in $erg$, and dividing $E$ by $\epsilon$ we get the number $P$ of energy elements which must be divided over the $N$ resonators.
• If the ratio thus calculated is not an integer, we take for $P$ an integer in the neighbourhood. It is clear that the distribution of $P$ energy elements over $N$ resonators can only take place in a finite, well-defined number of ways.
We here see Planck introducing a constant of nature $h$, later referred to as Planck's constant, with a corresponding smallest quanta of energy $h\nu$ for radiation (light) of frequency $\nu$.
Then Einstein entered in 1905 with a law of photoelectricity with $h\nu$ viewed as the energy of a light quantum of frequency $\nu$, later named the photon and crowned as an elementary particle.
Finally, in 1926 Schrödinger formulated a wave equation involving a formal momentum operator $-i\bar h\nabla$ including Planck's constant, as the birth of quantum mechanics, the incarnation of modern physics based on postulating that microscopic physics is
1. "quantized" with smallest quanta of energy $h\nu$,
2. indeterministic with discrete quantum jumps obeying laws of statistics.
However, microscopics based on statistics is contradictory, since it requires microscopics of microscopics in an endless regression, which has led modern physics into an impasse of ever increasing irrationality into many-worlds and string theory as expressions of scientific regression to microscopics of microscopics. The idea of "quantization" of the microscopic world goes back to the atomism of Democritus, a primitive scientific idea rejected already by Aristotle arguing for the continuum, which however combined with modern statistics has ruined physics.
But there is another way of avoiding the ultraviolet catastrophe without statistics, which is presented on Computational Blackbody Radiation, with physics viewed as analog finite precision computation which can be modeled as digital computational simulation.
This is physics governed by deterministic wave equations with solutions evolving in analog computational processes, which can be simulated digitally. This is physics without microscopic games of roulette as rational deterministic classical physics subject only to natural limitations of finite precision computation.
This opens to a view of quantum physics as digital continuum physics which can bring rationality back to physics. It opens to explore an analog physical atomistic world as a digital simulated world where the digital simulation reconnects to analog microelectronics. It opens to explore physics by exploring the digital model, readily available for inspection and analysis in contrast to analog physics hidden to inspection.
The microprocessor world is "quantized" into discrete processing units but it is a deterministic world with digital output:
Monday, March 24, 2014
Hollywood vs Principle of Least Action
The fictional character of the Principle of Least Action, viewed as serving a fundamental role in physics, can be understood by comparison with making movies:
The dimension of action as energy x time comes out very naturally in movie making as actor energy x length of the scene. However, outside Hollywood a quantity of dimension energy x time is questionable from a physical point of view, since there seems to be no natural movie camera which can record and store such a quantity.
Sunday, March 23, 2014
Why the Same Universal Quantum of Action $h$ in Radiation, Photoelectricity and Quantum Mechanics?
Planck's constant $h$ as The Universal Quantum of Action was introduced by Planck in 1900 as a mathematical statistical trick to supply the classical Rayleigh-Jeans radiation law $I(\nu ,T)=\gamma T\nu^2$ with a high-frequency cut-off factor $\theta (\nu ,T)$ to make it fit with observations including Wien's displacement law, where
• $\theta (\nu ,T) =\frac{\alpha}{\exp(\alpha )-1}$,
• $\alpha =\frac{h\nu}{kT}$,
$\nu$ is frequency, $T$ temperature in Kelvin $K$, $k =1.38066\times 10^{-23}\, J/K$ is Boltzmann's constant and $\gamma =\frac{2k}{c^2}$ with $c\, m/s$ the speed of light in vacuum. Planck then determined $h$ from experimental radiation spectra to have a value of $6.55\times 10^{-34}\, Js$, as well as Boltzmann's constant to be $1.346\times 10^{-23}\, J/K$, with $\frac{h}{k}= 4.87\times 10^{-11}\, Ks$ as the effective parameter in the cut-off.
Planck viewed $h$ as a fictional mathematical quantity without real physical meaning, with $h\nu$ a fictional mathematical quantity as a smallest packet of energy of a wave of frequency $\nu$, but in 1905 the young ambitious Einstein suggested an energy balance for photoelectricity of the form
• $h\nu = W + E$,
with $W$ the energy required to release one electron from a metallic surface and $E$ the energy of a released electron, with $h\nu$ interpreted as the energy of a light photon of frequency $\nu$ as a discrete lump of energy. Since the left hand side $h\nu$ in this law of photoelectricity was determined by the value of $h$ in Planck's radiation law, a new energy measure for electrons, the electronvolt, was defined by the relation $W + E =h\nu$. As if by magic, the same Universal Quantum of Action $h$ then appeared to serve a fundamental role in both radiation and photoelectricity.
What a wonderful magical coincidence that the energy of a light photon of frequency $\nu$ showed up to be exactly $h\nu \, Joule$! In one shot, Planck's fictional smallest quantum of energy $h\nu$ had, in the hands of the young ambitious Einstein, been turned into a reality as the energy of a light photon of frequency $\nu$, and of course, because a photon carries a definite packet of energy, a photon must be real. Voila!
In 1926 Planck's constant $h$ showed up again in a new context, now in Schrödinger's equation
• $-\frac{\bar h^2}{2m}\Delta\psi = E\psi$
with the formal connection
• $p = -i\bar h \nabla$ with $\bar h =\frac{h}{2\pi}$,
• $\frac{\vert p\vert^2}{2m} = E$,
as a formal analog of the classical expression of kinetic energy $\frac{\vert p\vert ^2}{2m}$ with $p=mv$ momentum, $m$ mass and $v$ velocity.
Planck's constant $h$, originally determined to make theory fit with observations of radiation spectra and then by Planck in 1900 canonized as The Universal Quantum of Action, thus in 1905 served to attribute the energy $h\nu$ to the new fictional formal quantity of a photon of frequency $\nu$. In 1926 a similar formal connection was made in the formulation of Schrödinger's wave equation.
The result is that the same Universal Quantum of Action $h$ is claimed by all modern physicists to play a fundamental role in (i) radiation, (ii) photoelectricity and (iii) the quantum mechanics of the atom. This is taken as an expression of a deep mystical one-ness of physics which only physicists can grasp, while in fact it is a play with definitions without mystery, where $h$ appears as a parameter in a high-frequency cut-off factor in Planck's Law, or rather in the combination $\hat h =\frac{h}{k}$, and then is transferred into (ii) and (iii) by definition. Universality can this way be created by human hands, by definition. The power of thinking has no limitations, or cut-off.
No wonder that Schrödinger had lifelong interest in the Vedanta philosophy of Hinduism "played out on one universal consciousness".
But Einstein's invention of the photon as light quanta in 1905 haunted him through life and approaching the end in 1954, he confessed:
• All these fifty years of conscious brooding have brought me no nearer to the answer to the question, "What are light quanta?" Nowadays every Tom, Dick and Harry thinks he knows it, but he is mistaken.
Real physics always shows up to be more interesting than fictional physics, cf. Dr Faustus of Modern Physics.
PS Planck's constant $h$ is usually measured by (ii) and is then transferred to (i) and (iii) by ad hoc definition.
The Torturer's Dilemma vs Uncertainty Principle vs Computational Simulation
Saturday, March 22, 2014
The True Meaning of Planck's Constant as Measure of Wavelength of Maximal Radiance and Small-Wavelength Cut-off.
The modern physics of quantum mechanics was born in 1900 when Max Planck, after many unsuccessful attempts, in an "act of despair" introduced a universal smallest quantum of action $h= 6.626\times 10^{-34}\, Js = 4.14\times 10^{-15}\, eVs$, named Planck's constant, in a theoretical justification of the spectrum of radiating bodies observed in experiments, based on statistics of packets of energy of size $h\nu$ with $\nu$ frequency.
Planck describes this monumental moment in the history of science in his 1918 Nobel Lecture as follows:
Planck thus finally succeeded in proving Planck's radiation law as a modification of the Rayleigh-Jeans law with a high-frequency cut-off factor eliminating "the ultraviolet catastrophe" which had paralyzed physics shortly after the introduction of Maxwell's wave equations for electromagnetics as the culmination of classical physics.
Planck's constant $h$ enters Planck's law
• $I(\nu ,T)=\gamma \theta (\nu , T)\nu^2 T$, where $\gamma =\frac{2k}{c^2}$,
where $I(\nu ,T)$ is the normalized radiance, with $h$ entering as a parameter in the multiplicative factor
• $\theta (\nu ,T)=\frac{\alpha}{e^{\alpha} -1}$,
where $\nu$ is frequency, $T$ temperature in Kelvin $K$ and $k = 1.38\times 10^{-23}\, J/K = 8.62\times 10^{-5}\, eV/K$ is Boltzmann's constant and $c\, m/s$ the speed of light.
We see that $\theta (\nu ,T)\approx 1$ for small $\alpha$ and enforces a high-frequency small-wavelength cut-off for $\alpha > 10$, that is, for
• $\nu > \nu_{max}\approx \frac{10T}{\hat h}$ where $\hat h =\frac{h}{k}=4.8\times 10^{-11}\, Ks$,
• $\lambda < \lambda_{min}\approx \frac{c}{10T}\hat h$ where $\nu\lambda =c$,
with maximal radiance occurring for $\alpha = 2.821$ in accordance with Wien's displacement law. With $T = 1000\, K$ the cut-off lies near the visible range, at $\nu\approx 2\times 10^{14}\, Hz$ and $\lambda\approx 10^{-6}\, m$. We see that the relation
• $\frac{c}{10T}\hat h =\lambda_{min}$,
gives $\hat h$ a physical meaning as measure of wave-length of maximal radiance and small-wavelength cut-off of atomic size scaling with $\frac{c}{T}$.
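The quoted numbers are easy to reproduce; a minimal sketch (my own check of the figures above): maximizing $\nu^2\theta (\nu ,T)\propto\alpha^3/(e^\alpha -1)$ gives the condition $3(1-e^{-\alpha})=\alpha$ with root $\alpha\approx 2.821$, and the $T=1000\, K$ values follow directly:

import math
from scipy.optimize import brentq

# Wien condition from d/d(alpha) [alpha^3/(e^alpha - 1)] = 0.
alpha_max = brentq(lambda a: 3.0 * (1.0 - math.exp(-a)) - a, 1.0, 5.0)

h_hat, T, c = 4.8e-11, 1000.0, 3.0e8    # Ks, K, m/s
nu_max = alpha_max * T / h_hat          # frequency of maximal radiance
nu_cut = 10.0 * T / h_hat               # cut-off at alpha ~ 10
print(alpha_max)                        # ~2.821
print(nu_max, c / nu_max)               # ~5.9e13 Hz, ~5e-6 m
print(nu_cut, c / nu_cut)               # ~2.1e14 Hz, ~1.4e-6 m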
Modern physicists are trained to believe that Planck's constant $h$ as the universal quantum of action represents a smallest unit of a "quantized" world, with a corresponding Planck length $l_p= 1.62\times 10^{-35}\, m$ as a smallest unit of length, about 20 orders of magnitude smaller than the proton diameter.
We have seen that Planck's constant enters in Planck's radiation law in the form $\hat h =\frac{h}{k}$, and not as $h$, and that $\hat h$ has the role of setting a small-wavelength cut-off scaling with $\frac{c}{T}$.
Small-wavelength cut-off in the radiation from a body is possible to envision in wave mechanics as an expression of finite precision analog computation. In this perspective Planck's universal quantum of action emerges as unnecessary fiction about exceedingly small quantities beyond reason and reality.
Thursday, March 20, 2014
Principle of Least Action vs Adam Smith's Invisible Hand
Violation of the PLA of the capitalistic system in 1929.
The Principle of Least Action (PLA) expressing
• Stationarity of the Action (the integral in time of the Lagrangian),
with the Lagrangian the difference between kinetic and potential energies, is cherished by physicists as a deep truth about physics: Tell me the Lagrangian and I will tell you the physics, because a dynamical system will (by reaction to local forces) evolve so as to keep the Action stationary as if led by an invisible hand steering the system towards a final cause of least action.
PLA is similar to the invisible hand of Adam Smith, supposedly steering an economy towards a final cause of maximal effectivity or least action (maximal common happiness) by asking each member of the economy to seek to maximize individual profit (individual happiness). This is the essence of the capitalistic system. The idea is that a final cause of maximal effectivity can be reached without telling the members the meaning of the whole thing, just telling each one to seek to maximize his/her own individual profit (happiness).
Today the capitalistic system is shaking and nobody knows how to steer towards a final cause of maximal efficiency. So the PLA of economy seems to be rather empty of content. It may be that, similarly, the PLA of physics is void of real physics. In particular, the idea of a smallest quantum of action as a basis of quantum mechanics may well be unphysical.
To Per-Anders Ivert, Editor of the SMS-Bulletinen
I have sent the following contribution to Bulletinen, the newsletter of the Swedish Mathematical Society, prompted by editor Per-Anders Ivert's opening words in the February 2014 issue.
To the SMS-Bulletinen
Editor Per-Anders Ivert opens the February issue of Bulletinen with: "Speaking of reactions: such rarely arrive, but I was made aware of an amusing reaction to something I wrote a few issues ago about whether school mathematics is needed. Some character ('jeppe') from Chalmers, a person I do not know and believe I have never been in contact with, wrote on his blog":
• The October issue of the Swedish Mathematical Society's Bulletin takes up the question of whether school mathematics is "needed".
• Chairman Per-Anders Ivert opens with: I myself cannot answer what is needed and not needed. It depends on what one means by "needed" and also on what school mathematics looks like.
• Ulf Persson follows up with a reflection that begins: It seems to be a fact that a large part of the population detests mathematics and finds school mathematics painful.
• Ivert and Persson express the bewilderment, and the resulting anxiety, that characterizes the mathematician's view of the role of his subject in today's school: the professional mathematician no longer knows whether school mathematics is "needed", and then neither the school mathematician nor the pupil knows it either.
Ivert continues:
• "When I saw this I was rather surprised. I thought my quoted words were completely uncontroversial, and I did not quite understand what motivated the sarcasm 'chairman'. This Chalmers player probably did not think I was chairman of the Society; presumably it is meant as some allusion to East Asian political structures."
• "On a closer reading I saw, however, that Ulf Persson had criticized this blogger in his text, which apparently had caused a mental short-circuit in the blogger, and the associations had started running criss-cross. If one wants to ponder my 'bewilderment and anxiety', I offer some material in this issue."
Ivert's exposition about the "character from Chalmers" and "Chalmers player" should be seen against the background of the open letter to the Swedish Mathematical Society and the National Committee for Mathematics which I published on my blog on December 22, 2013, asking what responsibility the Society and the Committee take for mathematics education in the country, including school mathematics and the ongoing Matematiklyftet (the national Boost for Mathematics).
Despite several reminders I have received no answer from the Society (chairman Pär Kurlberg), the Committee (Torbjörn Lundh) or KVA-Mathematics (Nils Dencker), and I now put the question once more, directly to you, Per-Anders Ivert: if you and the Society have not been struck by any "bewilderment and anxiety", then you must be able to give an answer and publish it together with this contribution of mine in the next issue of Bulletinen.
Regarding Ulf Persson's piece under "The floor is mine" (Ordet är mitt), one may say that what counts as far as knowledge is concerned is difference in knowledge: what everybody knows is of little interest. A school that primarily aims at giving everybody a common base of knowledge, whatever it may be, has difficulty motivating its pupils and is devastating both for the many who do not reach the common goals and for the somewhat fewer who could perform much better. As long as Euclidean geometry and Latin were reserved for a small fraction of the pupils, motivation could be created and study goals attained, fairly independently of the intellectual capacity and social background of pupils (and teachers). Matematiklyftet, which is to lift everybody, is an empty swing in the air at great cost.
The epithets applied to my person in the Bulletinen have now been extended from "the Johnson gang" to "character from Chalmers" and "Chalmers player", the latter perhaps no longer so current since I moved to KTH 7 years ago. Per-Anders complains about linguistic debasement, but that evidently does not include "jeppe", "lirare" ("player") and "mental short-circuit".
Claes Johnson
professor emeritus of applied mathematics, KTH
Wednesday, March 19, 2014
Lagrange's Biggest Mistake: Least Action Principle Not Physics!
The Principle of Least Action, formulated by Lagrange in his monumental treatise Mecanique Analytique (1811) collecting 50 years of work, is viewed as the crown jewel of the Calculus of Newton and Leibniz as the mathematical basis of the scientific revolution:
• The equations of motion of a dynamical system are the same equations that express that the action, the integral over time of the difference between kinetic and potential energies, is stationary, that is, does not change under small variations.
The basic idea goes back to Leibniz:
• In change of motion, the quantity of action takes on a Maximum or Minimum.
And to Maupertuis (1746):
• Whenever any action occurs in nature, the quantity of action employed in this change is the least possible.
In mathematical terms, the Principle of Least Action expresses that the trajectory $u(t)$ followed by a dynamical system over a given time interval $I$ with time coordinate $t$, is determined by the condition of stationarity of the action:
• $\frac{d}{d\epsilon}\int_I(T(u(t)+\epsilon v(t)) - V(u(t)+\epsilon v(t)))\, dt =0$,
where $T(u(t))$ is the kinetic energy and $V(u(t))$ the potential energy of $u(t)$ at time $t$, and $v(t)$ is an arbitrary perturbation of $u(t)$, combined with an initial condition. In the basic case of a harmonic oscillator:
• $T(u(t))=\frac{1}{2}\dot u^2(t)$ with $\dot u=\frac{du}{dt}$,
• $V(u(t))=\frac{1}{2}u^2(t)$
• stationarity is expressed as force balance in the form of Newton's 2nd law: $\ddot u (t) +u(t) = 0$.
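For the record, the variation is a one-line computation (standard calculus of variations, spelled out here for readability):\[
0=\frac{d}{d\epsilon}\Big\vert_{\epsilon =0}\int_I\frac{1}{2}\left((\dot u+\epsilon\dot v)^2-(u+\epsilon v)^2\right)dt=\int_I(\dot u\dot v-uv)\, dt=-\int_I(\ddot u+u)v\, dt,
\]after integration by parts with $v$ vanishing at the ends of $I$; since $v$ is arbitrary, force balance $\ddot u+u=0$ follows.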
The Principle of Least Action is viewed as a constructive way of deriving the equations of motion expressing force balance according to Newton's 2nd law, in situations with specific choices of coordinates for which direct establishment of the equations is tricky.
From the success in this respect the Principle of Least Action has been elevated from mathematical trick to physical principle asking Nature to arrange itself so as to keep the action stationary, as if Nature could compare the action integral for different trajectories and choose the trajectory with least action towards a teleological final cause, while in fact Nature can only respond to forces as expressed in equations of motion.
But if Nature does not have the capability of evaluating and comparing action integrals, it can be misleading to think this way. In the worst case it leads to invention of physics without real meaning, which is acknowledged by Lagrange in the Preface to Mecanique Analytique.
The ultimate example is the very foundation of quantum physics as the pillar of modern physics, based on a concept of an elementary (smallest) quantum of action, denoted by $h$ and named Planck's constant, with dimension $energy \times time$. Physicists are trained to view the elementary quantum of action as representing a "quantization" of reality, expressed as follows on Wikipedia:
In the quantum world light consists of a stream of discrete light quanta named photons. Although Einstein in his 1905 article on the photoelectric effect found it useful as a heuristic idea to speak about light quanta, he later changed mind:
• The quanta really are a hopeless mess. (to Pauli)
But nobody listened to Einstein and there we are today, with an elementary quantum of action which is viewed as the basis of modern physics but has no physical reality. Schrödinger, supported by Einstein, said:
• There are no particles or quanta. All is waves.
Connecting to the previous post, note that to compute a solution according to the Principle of Least Action, typically an iterative method based on relaxation of the equations of motion is used, which has a physical meaning as response to imbalance of forces, as in the sketch below. This shows the strong connection between computational mathematics as iterative time-stepping and analog physics as motion in time subject to forces, which can be seen as a mindless evolution towards a hidden final cause, as if directed by an invisible hand of a mind understanding the final cause.
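A minimal sketch of such relaxation (my own illustration, using a convex static model problem – an elastic string with $-u^{\prime\prime}+u=f$ – where the iteration provably converges to the energy minimizer):

import numpy as np

# Relaxation toward force balance for -u'' + u = f on (0,1), u(0)=u(1)=0.
# Each sweep nudges u in the direction of the force imbalance (the residual),
# i.e. explicit gradient descent on the energy int (u'^2/2 + u^2/2 - f*u) dx.
n = 100
dx = 1.0 / n
u = np.zeros(n + 1)
f = np.ones(n + 1)

step = 0.4 * dx**2                      # stable pseudo-time step
for _ in range(20000):
    residual = f[1:-1] + (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2 - u[1:-1]
    u[1:-1] += step * residual          # response to imbalance of forces

print(u[n // 2])                        # ~0.1132 = 1 - 1/cosh(1/2)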
Physics as Analog Computation instead of Physics as Observation
5. Meaning of Heisenberg's Uncertainty Principle.
7. Statistical interpretation of Schrödinger's multidimensional wave function.
8. Meaning of Bohr's Complementarity Principle.
9. Meaning of Least Action Principle.
5. Uncertainty Principle as effect of finite precision computation.
6. Statistics replaced by finite precision computation.
Tuesday, March 18, 2014
Blackbody as Linear High Gain Amplifier
A blackbody acts as a high gain linear (black) amplifier.
The analysis on Computational Blackbody Radiation (with book) shows that a radiating body can be seen as a linear high gain amplifier with a high-frequency cut-off scaling with noise temperature, modeled by a wave equation with small damping, which after Fourier decomposition in space takes the form of a damped linear oscillator for each wave frequency $\nu$:
• $\ddot u_\nu +\nu^2u_\nu - \gamma\dddot u_\nu = f_\nu$,
where $u_\nu(t)$ is the oscillator amplitude and $f_\nu (t)$ the signal amplitude of wave frequency $\nu$, with $t$ time, the dot indicating differentiation with respect to $t$, and $\gamma$ a small constant satisfying $\gamma\nu^2 << 1$; the frequency is subject to a cut-off of the form $\nu < \frac{T_\nu}{h}$, where
• $T_\nu =\overline{\dot u_\nu^2}\equiv\int_I \dot u_\nu^2(t)\, dt$,
is the (noise) temperature of frequency $\nu$, $I$ a unit time interval and $h$ a constant representing a level of finite precision.
The analysis shows, under an assumption of near resonance, the following basic relation in the stationary state:
• $\gamma\overline{\ddot u_\nu^2} \approx \overline{f_\nu^2}$,
as a consequence of the small damping guiding $u_\nu (t)$ so that $\dot u_\nu(t)$ is out of phase with $f_\nu(t)$ and thus "pumps" the system little. The result is that the signal $f_\nu (t)$ is balanced for the major part by the oscillator term
• $\ddot u_\nu +\nu^2u_\nu$,
and for the minor part by the damping term
• $ - \gamma\dddot u_\nu$,
since
• $\gamma^2\overline{\dddot u_\nu^2} \approx \gamma\nu^2\,\gamma\overline{\ddot u_\nu^2}\approx\gamma\nu^2\overline{f_\nu^2} <<\overline{f_\nu^2}$.
This means that the blackbody can be viewed as acting as an amplifier radiating the signal $f_\nu$ under the small input $-\gamma \dddot u_\nu$, thus with a high gain. The high-frequency cut-off then gives a requirement on the temperature $T_\nu$, referred to as the noise temperature, to achieve high gain.
Quantum Mechanics from Blackbody Radiation as "Act of Despair"
Max Planck: The whole procedure was an act of despair because a theoretical interpretation (of black-body radiation) had to be found at any price, no matter how high that might be… I was ready to sacrifice any of my previous convictions about physics… For this reason, on the very first day when I formulated this law, I began to devote myself to the task of investing it with true physical meaning.
The textbook history of modern physics tells that quantum mechanics was born from Planck's proof of the universal law of blackbody radiation, based on statistics of discrete lumps of energy or energy quanta $h\nu$, where $h$ is Planck's constant and $\nu$ frequency. The textbook definition of a blackbody is a body which absorbs all, reflects none and re-emits all of incident radiation:
• A black body is an idealized physical body that absorbs all incident electromagnetic radiation, regardless of frequency or angle of incidence. (Wikipedia)
• Theoretical surface that absorbs all radiant energy that falls on it, and radiates electromagnetic energy at all frequencies, from radio waves to gamma rays, with an intensity distribution dependent on its temperature. (Merriam-Webster)
• An ideal object that is a perfect absorber of light (hence the name since it would appear completely black if it were cold), and also a perfect emitter of light. (Astro Virginia)
• A black body is a theoretical object that absorbs 100% of the radiation that hits it. Therefore it reflects no radiation and appears perfectly black. (Egglescliff)
• A hypothetic body that completely absorbs all wavelengths of thermal radiation incident on it. (Eric Weisstein's World of Physics)
But there is something more to a blackbody, and that is the high frequency cut-off, expressed in Wien's displacement law, of the principal form
• $\nu < \frac{T}{\hat h}$,
where $\nu$ is frequency, $T$ temperature and $\hat h$ a Planck constant, stating that only frequencies below the cut-off $\frac{T}{\hat h}$ are re-emitted. Absorbed frequencies above the cut-off will then be stored as internal energy in the body under increasing temperature.
Bodies made of different materials which absorb all incident radiation will have different high-frequency cut-offs, and an (ideal) blackbody should then be characterized as having maximal cut-off, that is the smallest Planck constant $\hat h$, with the maximum taken over all real bodies.
A cavity with graphite walls is used as a reference blackbody defined by the following properties:
1. absorption of all incident radiation
2. maximal cut-off - smallest Planck constant $\hat h\approx 4.8\times 10^{-11}\, Ks$,
and $\hat h =\frac{h}{k}$ is Planck's constant $h$ scaled by Boltzmann's constant $k$.
Planck viewed the high frequency cut-off defined by the Planck constant $\hat h$ as inexplicable in Maxwell's classical electromagnetic wave theory. In an "act of despair" to save physics from collapse in an "ultraviolet catastrophe" – a role which Planck had taken on – Planck then resorted to statistics of discrete energy quanta $h\nu$, which in the 1920s resurfaced as a basic element of quantum mechanics.
But a high frequency cut-off in wave mechanics is not inexplicable; it is a well known phenomenon in all forms of waves, including elastic, acoustic and electromagnetic waves, and can be modeled as a dissipative loss effect, where high frequency wave motion is broken down into chaotic motion stored as internal heat energy. For details, see Computational Blackbody Radiation.
It is a mystery why this was not understood by Planck. Science created in an "act of despair" runs the risk of being irrational and flat wrong, and that is, if anything, the trademark of quantum mechanics based on discrete quanta.
Quantum mechanics as deterministic wave mechanics may be rational and understandable. Quantum mechanics as statistics of quanta is irrational and confusing. All the troubles and mysteries of quantum mechanics emanate from the idea of discrete quanta. Schrödinger had the solution:
• I insist upon the view that all is waves.
But Schrödinger was overpowered by Bohr and Heisenberg, who have twisted the brains of modern physicists with devastating consequences...
Monday, March 17, 2014
Unphysical Combination of Complementary Experiments
Let us take a look at how Bohr in his famous 1927 Como Lecture describes complementarity as a fundamental aspect of Bohr's Copenhagen Interpretation still dominating textbook presentations of quantum mechanics:
• The quantum theory is characterised by the acknowledgment of a fundamental limitation in the classical physical ideas when applied to atomic phenomena. The situation thus created is of a peculiar nature, since our interpretation of the experimental material rests essentially upon the classical concepts.
• Notwithstanding the difficulties which hence are involved in the formulation of the quantum theory, it seems, as we shall see, that its essence may be expressed in the so-called quantum postulate, which attributes to any atomic process an essential discontinuity, or rather individuality, completely foreign to the classical theories and symbolised by Planck's quantum of action.
OK, we learn that quantum theory is based on a quantum postulate about an essential discontinuity symbolised as Planck's constant $h=6.626\times 10^{-34}\, Js$ as a quantum of action. Next we read about necessary interaction between the phenomena under observation and the observer:
• Accordingly, an independent reality in the ordinary physical sense can neither be ascribed to the phenomena nor to the agencies of observation.
• The circumstance, however, that in interpreting observations use has always to be made of theoretical notions, entails that for every particular case it is a question of convenience at what point the concept of observation involving the quantum postulate with its inherent 'irrationality' is brought in.
Next, Bohr emphasizes the contrast between the quantum of action and classical concepts:
• The fundamental contrast between the quantum of action and the classical concepts is immediately apparent from the simple formulas which form the common foundation of the theory of light quanta and of the wave theory of material particles. If Planck's constant be denoted by $h$, as is well known: $E\tau = I \lambda = h$, where $E$ and $I$ are energy and momentum respectively, $\tau$ and $\lambda$ the corresponding period of vibration and wave-length.
• In these formulae the two notions of light and also of matter enter in sharp contrast.
• While energy and momentum are associated with the concept of particles, and hence may be characterised according to the classical point of view by definite space-time co-ordinates, the period of vibration and wave-length refer to a plane harmonic wave train of unlimited extent in space and time.
• Just this situation brings out most strikingly the complementary character of the description of atomic phenomena which appears as an inevitable consequence of the contrast between the quantum postulate and the distinction between object and agency of measurement, inherent in our very idea of observation.
Bohr clearly brings out the unphysical aspects of the basic action formula
• $E\tau = I \lambda = h$,
where energy $E$ and momentum $I$ related to particle are combined with period $\tau$ and wave-length $\lambda$ related to wave.
Bohr then seeks to resolve the contradiction by naming it complementarity as an effect of interaction between instrument and object:
• In quantum mechanics, however, evidence about atomic objects obtained by different experimental arrangements exhibits a novel kind of complementary relationship.
• … the notion of complementarity simply characterizes the answers we can receive by such inquiry, whenever the interaction between the measuring instruments and the objects form an integral part of the phenomena.
Bohr's complementarity principle has been questioned by many over the years:
• Bohr’s interpretation of quantum mechanics has been criticized as incoherent and opportunistic, and based on doubtful philosophical premises. (Simon Saunders)
• Despite the expenditure of much effort, I have been unable to obtain a clear understanding of Bohr’s principle of complementarity (Einstein).
Of course an object may have complementary qualities, such as color and weight, which can be measured in different experiments, but it is meaningless to form a new concept as color times weight, or colorweight, and then desperately seek to give it a meaning.
In the New View presented on Computational Blackbody Radiation, the concept of action as e.g. position times velocity has a meaning in a threshold condition for dissipation, but is not a measure of a quantity which is carried by a physical object, such as mass and energy.
The ruling Copenhagen interpretation was developed by Bohr, contributing a complementarity principle, and Heisenberg, contributing a related uncertainty principle based on position times momentum (or velocity) as Bohr's unphysical complementary combination. The uncertainty principle is often expressed as a lower bound on the product of weighted norms of a function and its Fourier transform, and then interpreted as a combat between localization in space and frequency, or between particle and wave. In this form of the uncertainty principle the unphysical aspect of a product of position and frequency is hidden by mathematics.
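For reference, one standard mathematical form of that statement (a sketch in the normalization $\hat f(\xi )=\int f(x)e^{-2\pi ix\xi}dx$; the constant depends on the convention):\[
\left(\int x^2\vert f(x)\vert^2dx\right)\left(\int \xi^2\vert\hat f(\xi )\vert^2d\xi\right)\ \ge\ \frac{\Vert f\Vert_2^4}{16\pi^2},
\]with equality for Gaussians: squeezing $f$ in $x$ forces $\hat f$ to spread in $\xi$.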
The Copenhagen Interpretation was completed by Born's suggestion to view (the square of the modulus of) Schrödinger's wave function as a probability distribution for particle configuration, which in the absence of something better became the accepted way to handle the apparent wave-particle contradiction, by viewing it as a combination of probability wave with particle distribution.
New Uncertainty Principle as Wien's Displacement Law
The recent series of posts based on Computational Blackbody Radiation suggest that Heisenberg's Uncertainty Principle can be understood as a consequence of Wien's Displacement Law expressing high-frequency cut-off in blackbody radiation scaling with temperature according to Planck's radiation law:
• $B_\nu (T)=\gamma\nu^2T\times \theta(\nu ,T)$,
where $B_\nu (T)$ is radiated energy per unit frequency, surface area, viewing angle and second, $\gamma =\frac{2k}{c^2}$ where $k = 1.3806488\times 10^{-23} m^2 kg/s^2 K$ is Boltzmann's constant and $c$ the speed of light in $m/s$, $T$ is temperature in Kelvin $K$,
• $\theta (\nu ,T)=\frac{\alpha}{e^\alpha -1}$,
where $\theta (\nu ,T)\approx 1$ for $\alpha < 1$ and $\theta (\nu ,T)\approx 0$ for $\alpha > 10$ as high frequency cut-off, with $\alpha =\frac{h\nu}{kT}$ and $h=6.626\times 10^{-34}\, Js$ Planck's constant. More precisely, maximal radiance for a given temperature $T$ occurs for $\alpha \approx 2.821$, with corresponding frequency
• $\nu_{max} = 2.821\frac{T}{\hat h}$ where $\hat h=\frac{h}{k}=4.8\times 10^{-11}\, Ks$,
with a rapid drop for $\nu >\nu_{max}$.
The proof of Planck's Law in Computational Blackbody Radiation explains the high frequency cut-off as a consequence of finite precision computation introducing a dissipative effect damping high-frequencies.
A connection to Heisenberg's Uncertainty Principle can be made by noting that a high-frequency cut-off condition of the form
• $\nu < \frac{T}{\hat h}$
can be rephrased in the following form connecting to Heisenberg's Uncertainty Principle:
• $u_\nu\dot u_\nu > \hat h$ (New Uncertainty Principle)
where $u_\nu$ is position amplitude, $\dot u_\nu =\nu u_\nu$ is velocity amplitude of a wave of frequency $\nu$ with $\dot u_\nu^2 =T$.
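The rephrasing is immediate from the definitions just given; spelled out:\[
\nu < \frac{T}{\hat h}=\frac{\dot u_\nu^2}{\hat h}
\quad\Longleftrightarrow\quad
\hat h < \frac{\dot u_\nu^2}{\nu}=\dot u_\nu\cdot\frac{\dot u_\nu}{\nu}=\dot u_\nu u_\nu .
\]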
The New Uncertainty Principle expresses that observation/detection of a wave, that is observation/detection of amplitude $u$ and frequency $\nu =\frac{\dot u}{u}$ of a wave, requires
• $u\dot u>\hat h$.
The New Uncertainty Principle concerns observation/detection amplitude and frequency as physical aspects of wave motion, and not as Heisenberg's Uncertainty Principle particle position and wave frequency as unphysical complementary aspects.
Sunday, March 16, 2014
Uncertainty Principle, Whispering and Looking at a Faint Star
The recent series of posts on Heisenberg's Uncertainty Principle based on Computational Blackbody Radiation suggests the following alternative equivalent formulations of the principle:
1. $\nu < \frac{T}{\hat h}$,
2. $u_\nu\dot u_\nu > \hat h$,
where $u_\nu$ is position amplitude, $\dot u_\nu =\nu u_\nu$ is velocity amplitude of a wave of frequency $\nu$ with $\dot u_\nu^2 =T$, and $\hat h =4.8\times 10^{-11}Ks$ is Planck's constant scaled with Boltzmann's constant.
Here, 1 represents Wien's displacement law stating that the radiation from a body is subject to a frequency limit scaling with temperature $T$ with the factor $\frac{1}{\hat h}$.
2 is superficially similar to Heisenberg's Uncertainty Principle, as an expression of the following physics: in order to detect a wave of amplitude $u$, it is necessary that the frequency $\nu$ of the wave satisfies $\nu u^2 = u\dot u > \hat h$. In particular, if the amplitude $u$ is small, then the frequency $\nu$ must be large.
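Since $u\dot u=\nu u^2=\frac{T}{\nu}$, formulations 1 and 2 are the same condition; a short Python check (with illustrative numbers) makes the equivalence explicit:

```python
h_hat = 4.8e-11   # scaled Planck constant, K s

def detectable(nu, T):
    """Compare formulations 1 and 2 for a wave with udot^2 = T and udot = nu*u."""
    udot = T**0.5             # velocity amplitude
    u = udot / nu             # position amplitude
    form1 = nu < T / h_hat    # frequency below the cut-off
    form2 = u * udot > h_hat  # u*udot = T/nu, so this is the same condition
    return form1, form2

for nu in (1e12, 1e13, 1e14):
    print(nu, detectable(nu, 300.0))  # the two booleans always agree
```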
This connects to (i) communication by whispering and (ii) viewing a distant star, both being based on the possibility of detecting small amplitude high-frequency waves.
The standard presentation of Heisenberg's Uncertainty Principle is loaded with contradictions:
• But what is the exact meaning of this principle, and indeed, is it really a principle of quantum mechanics? And, in particular, what does it mean to say that a quantity is determined only up to some uncertainty?
In other words, today there is no consensus on the meaning of Heisenberg's Uncertainty principle. The reason may be that it has no meaning, but that there is an alternative which is meaningful.
Notice in particular that the product of two complementary or conjugate variables, such as position and momentum, is questionable if viewed as representing a physical quantity, while as a threshold it can make sense.
Friday, March 14, 2014
DN Debatt: The Principle of Public Access Is Withering Away through 'Prostration Paragraphs'
Nils Funcke observes on DN Debatt, under the headline 'The principle of public access is about to wither away':
• The Swedish principle of public access is slowly but surely being worn down.
• ...outright prostration paragraphs (plattläggningsparagrafer) are accepted...
• Upon joining the EU in 1995, Sweden issued a declaration: The principle of public access, in particular the right of access to official documents, and the constitutional protection of the freedom to inform, are and remain fundamental principles forming part of Sweden's constitutional, political and cultural heritage.
An example of such a prostration paragraph is the new precedent of the Supreme Administrative Court (Högsta Förvaltningsdomstolen):
• For a document to be finalized, and thereby drawn up, and thereby a public document, some action must be taken that shows that the document is finalized.
With this new legal paragraph, the court lays the citizen flat on the ground beneath the authority, which can now itself decide whether and when the action that, according to the authority, is required for finalization has been taken by the authority, or not.
Thursday, March 13, 2014
Against Measurement, Against Copenhagen: For Rationality and Reality by Computation
John Bell's Against Measurement is a direct attack on the heart of quantum mechanics as expressed in the Copenhagen Interpretation according to Bohr.
Bell poses the following questions:
• What exactly qualifies some physical systems to play the role of "measurer"?
• Or did it have to wait a little longer, for some better qualified system…with a Ph D?
Physicists of today have no answers, with far-reaching consequences for all of science: If there is no rationality and reality in physics, the most rational and real of all sciences, then there can be no rationality and reality anywhere… If real physics is not about what is, then real physics is irrational and unreal… and then… any bubble can inflate to any size...
The story is well described by 1969 Nobel Laureate Murray Gell-Mann:
• Niels Bohr brainwashed a whole generation of theorists into thinking that the job of interpreting quantum theory was done 50 years ago.
But there is hope today, in digital simulation which offers observation without interference. Solving Schrödinger's equation by computation gives information about physical states without touching the physics. It opens a road to bring physics back to the rationality of 19th century physics in the quantum nano-world of today…without quantum computing...
Increasing Uncertainty about Heisenberg's Uncertainty Principle + Resolution
Heisenberg: My mind was formed by studying philosophy, Plato and that sort of thing… The reality we can put into words is never reality itself… The atoms or elementary particles themselves are not real; they form a world of potentialities or possibilities rather than one of things or facts… If we omitted all that is unclear, we would probably be left with completely uninteresting and trivial tautologies...
The 2012 article Violation of Heisenberg’s Measurement-Disturbance Relationship by Weak Measurements by Lee A. Rozema et al, informs us:
• The Heisenberg Uncertainty Principle is one of the cornerstones of quantum mechanics.
• In his original paper on the subject, Heisenberg wrote “At the instant of time when the position is determined, that is, at the instant when the photon is scattered by the electron, the electron undergoes a discontinuous change in momentum. This change is the greater the smaller the wavelength of the light employed, i.e., the more exact the determination of the position”.
• The modern version of the uncertainty principle proved in our textbooks today, however, deals not with the precision of a measurement and the disturbance it introduces, but with the intrinsic uncertainty any quantum state must possess, regardless of what measurement (if any) is performed.
• It has been shown that the original formulation is in fact mathematically incorrect.
OK, so we learn that Heisenberg's Uncertainty Principle (in its original formulation, presumably) is a cornerstone of quantum physics, which however is mathematically incorrect, and that there is a modern version concerned not with measurement but with an intrinsic uncertainty of a quantum state regardless of measurement. In other words, a cornerstone of quantum mechanics has been moved.
• The uncertainty principle (UP) occupies a peculiar position in physics. On the one hand, it is often regarded as the hallmark of quantum mechanics.
• On the other hand, there is still a great deal of discussion about what it actually says.
• A physicist will have much more difficulty in giving a precise formulation than in stating e.g. the principle of relativity (which is itself not easy).
• Moreover, the formulations given by various physicists differ greatly not only in their wording but also in their meaning.
We learn that the uncertainty of the uncertainty principle has been steadily increasing ever since it was formulated by Heisenberg in 1927.
In a recent series of posts based on Computational Blackbody Radiation I have suggested a new approach to the uncertainty principle as a high-frequency cut-off condition of the form
• $\nu < \frac{T}{\hat h}$,
where $\nu$ is frequency, $T$ temperature in Kelvin $K$ and $\hat h=4.8\times 10^{-11}\, Ks$ is a scaled Planck's constant. The significance of the cut-off is that a body of temperature $T\, K$ cannot emit frequencies larger than $\frac{T}{\hat h}$, because the wave synchronization required for emission is destroyed by internal friction damping these frequencies. The cut-off condition thus expresses Wien's displacement law.
The cut-off condition can alternatively be expressed as
• $u_\nu\dot u_\nu > \hat h$,
where $u_\nu$ is amplitude and $\dot u_\nu =\frac{du_\nu}{dt}$ velocity of a wave of frequency $\nu$ with $\dot u_\nu^2 =T$ and $\dot u_\nu =\nu u_\nu$. We see that the cut-off condition superficially has a form similar to Heisenberg's uncertainty principle, but that the meaning is entirely different and in fact familiar as Wien's displacement law.
We thus find that Heisenberg's uncertainty principle can be replaced by Wien's displacement law, which can be seen as an effect of internal friction preventing synchronization and thus emission of frequencies $\nu > \frac{T}{\hat h}$.
The high-frequency cut-off condition, with its dependence on temperature, is similar to the high-frequency damping of a loudspeaker, which can depend on the level of the sound.
Wednesday, March 12, 2014
Blackbody Radiation as Collective Vibration Synchronized by Resonance
There are two descriptions of the basic phenomenon of radiation from a heated body (blackbody or greybody radiation): one starting from light as a stream of particles named photons, the other from light as electromagnetic waves.
That the particle description of light is both primitive and unphysical was well understood before Einstein in 1905 suggested an explanation of the photoelectric effect based on light as a stream of particles later named photons, stimulated by Planck's derivation of Planck's law in 1900 based on radiation emitted in discrete quanta. However, with the development of quantum mechanics as a description of atomistic physics in the 1920s, the primitive and unphysical idea of light as a stream of particles was turned into a trademark of modern physics of highest insight.
The standpoint today is that light is both particle and wave, and the physicist is free to choose the description which best serves a given problem. In particular, the particle description is supposed to serve well to explain the physics of both blackbody radiation and photoelectricity. But since the particle description is primitive and unphysical, there must be something fishy about the idea that emission of radiation from a heated body results from emission of individual photons from individual atoms together forming a stream of photons leaving the body. We will return to the primitivism of this view after a study of the more educated idea of light as an (electromagnetic) wave phenomenon.
This more educated view is presented on Computational Blackbody Radiation with the following basic message:
1. Radiation is a collective phenomenon generated from in-phase oscillations of atoms in a structured web of atoms synchronized by resonance.
2. A radiating web of atoms acts like a system of tuning forks which tend to vibrate in phase as a result of resonance by acoustic waves. A radiating web of atoms acts like a swarm of cicadas singing in phase.
3. A radiating body has a high-frequency cut-off scaling with temperature, cutting off frequencies $\nu > \frac{T}{\hat h}$ with $\hat h = 4.8 \times 10^{-11}\, Ks$, where $\nu$ is frequency and $T$ temperature in degree Kelvin $K$, which translates to a wave-length $\lambda < \hat h\frac{c}{T}\, m$ as smallest correlation length for synchronization, where $c\, m/s$ is the speed of light. For $T =1500\, K$ we get $\lambda \approx 10^{-5}\, m$, which is about 20 times the wave-length of visible light (see the numerical check after this list).
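The numerical check promised in point 3, as a small Python sketch (constants rounded as in the text):

```python
h_hat = 4.8e-11    # scaled Planck constant, K s
c = 2.998e8        # speed of light, m/s
T = 1500.0         # temperature, K

lam_cut = h_hat * c / T   # smallest wave-length emitted at temperature T
lam_visible = 5e-7        # representative visible-light wave-length, m
print(lam_cut)                 # ~1e-5 m
print(lam_cut / lam_visible)   # ~20, as stated in point 3
```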
We can now understand that the particle view is primitive because it is unable to explain that the outgoing radiation consists of electromagnetic waves which are in-phase. If single atoms are emitting single photons there is no mechanism ensuring that corresponding particles/waves are in-phase, and so a most essential element is missing.
The analysis of Computational Blackbody Radiation shows that an ideal blackbody is characterized as a body which (i) is non-reflecting and (ii) has a maximal high-frequency cut-off. It is observed that the emission from a hole in a cavity with graphite walls is a realization of a blackbody. This fact can be understood as an effect of the regular surface structure of graphite supporting collective atom oscillations synchronized by resonance on an atomic surface web of smallest mesh size $\sim 10^{-9}\, m$. |
cf0afa44704ae4fa | Wednesday, May 23, 2012
Notices of the American Mathematical Society, Volume 52, Number 9, published a paper in which Mr. Mason A. Porter and Mr. Predrag Cvitanovic showed that the theory of dynamical systems used to design trajectories of space flights and the theory of transition states in chemical reactions share the same set of mathematics. We posit that this is a universal phenomenon and that every quantum system and phenomenon, including superposition, entanglement and spin, has macro equivalents. This will be proved, inter alia, by deriving bare mass and bare charge (subjects of Quantum Electrodynamics and Field Theory) without renormalization and without using a counter term, and linking them to dark matter and dark energy (subjects of cosmology). In the process we will give a simple conceptual mechanism for deriving all forces starting from a single source. We also posit that physics has been deliberately made incomprehensible with a preponderance of “mathematical modeling” to match experimental and observational data through the back door. Most of the “mathematics” in physics does not conform to mathematical principles.
In a paper “Is Reality Digital or Analogue”, published by the FQXi Community on Dec. 29, 2010, we have shown that uncertainty is not a law of Nature. It is the result of natural laws relating to measurement that reveal a kind of granularity at certain levels of existence that is related to causality. The left hand side of all valid equations or inequalities represents free will, as we are free to choose (or vary within certain constraints) the individual parameters. The right hand side represents determinism, as the outcome is based on the input in predictable ways. The equality (or inequality) sign prescribes the special conditions to be observed or matched to achieve the desired result. These special conditions, which cannot always be predetermined with certainty or chosen by us arbitrarily, introduce the element of uncertainty in measurements.
When Mr. Heisenberg proposed his conjecture in 1927, Mr. Earle Kennard independently derived a different formulation, which was later generalized by Mr. Howard Robertson as: σ(q)σ(p) ≥ h/4π. This inequality says that one cannot suppress quantum fluctuations of both position σ(q) and momentum σ(p) lower than a certain limit simultaneously. The fluctuation exists regardless of whether it is measured or not implying the existence of a universal field. The inequality does not say anything about what happens when a measurement is performed. Mr. Kennard’s formulation is therefore totally different from Mr. Heisenberg’s. However, because of the similarities in format and terminology of the two inequalities, most physicists have assumed that both formulations describe virtually the same phenomenon. Modern physicists actually use Mr. Kennard’s formulation in everyday research but mistakenly call it Mr. Heisenberg’s uncertainty principle. “Spontaneous” creation and annihilation of virtual particles in vacuum is possible only in Mr. Kennard’s formulation and not in Mr. Heisenberg’s formulation, as otherwise it would violate conservation laws. If it were violated experimentally, the whole of quantum mechanics would break down.
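As a numerical illustration of Mr. Kennard's formulation – not of any derivation in this article – the following Python sketch builds a Gaussian wave packet (with a hypothetical spread sigma) and verifies that σ(q)σ(p) comes out at the lower bound h/4π = ħ/2, which a Gaussian saturates:

```python
import numpy as np

hbar = 1.054571817e-34   # J s; note h/(4*pi) = hbar/2

sigma = 1e-10            # hypothetical position spread of the packet, m
x = np.linspace(-20 * sigma, 20 * sigma, 4096)
dx = x[1] - x[0]
psi = (2 * np.pi * sigma**2) ** (-0.25) * np.exp(-x**2 / (4 * sigma**2))

# sigma(q): standard deviation of position under |psi|^2 (mean is 0 by symmetry)
sigma_q = np.sqrt(np.sum(x**2 * np.abs(psi)**2) * dx)

# sigma(p): standard deviation of momentum p = hbar*k, from the Fourier transform
k_grid = 2 * np.pi * np.fft.fftfreq(x.size, d=dx)
weights = np.abs(np.fft.fft(psi))**2
weights /= weights.sum()
sigma_p = hbar * np.sqrt(np.sum(k_grid**2 * weights))

print(sigma_q * sigma_p)  # ~5.27e-35
print(hbar / 2)           # the Kennard bound h/(4*pi): the Gaussian saturates it
```

The fluctuation product exists for the state itself, whether or not any measurement is performed, which is exactly the point of Mr. Kennard's reading.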
The uncertainty relation of Mr. Heisenberg was reformulated in terms of standard deviations, where the focus was exclusively on the indeterminacy of predictions, whereas the unavoidable disturbance in the measurement process had been ignored. A correct formulation of the error–disturbance uncertainty relation, taking the perturbation into account, was essential for a deeper understanding of the uncertainty principle. In 2003 Mr. Masanao Ozawa developed the following formulation of the error and disturbance as well as fluctuations by directly measuring errors and disturbances in the observation of spin components: ε(q)η(p) + σ(q)η(p) + σ(p)ε(q) ≥ h/4π.
Mr. Ozawa’s inequality suggests that suppression of fluctuations is not the only way to reduce error, but it can be achieved by allowing a system to have larger fluctuations. Nature Physics (2012) (doi:10.1038/nphys2194) describes a neutron-optical experiment that records the error of a spin-component measurement as well as the disturbance caused on another spin-component. The results confirm that both error and disturbance obey the new relation but violate the old one in a wide range of experimental parameters. Even when either the source of error or disturbance is held to nearly zero, the other remains finite. Our description of uncertainty follows this revised formulation.
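The structure of Mr. Ozawa's inequality can be illustrated with hypothetical numbers (chosen here purely for illustration, not taken from the neutron experiment): even when the error–disturbance product ε(q)η(p) falls below h/4π, the full three-term sum stays above it:

```python
hbar = 1.0        # work in units where hbar = 1, so the bound is 1/2
bound = hbar / 2

# Hypothetical, purely illustrative figures for one measurement scheme:
eps_q = 0.05      # error of the position measurement
eta_p = 2.0       # disturbance imparted to the momentum
sig_q = 1.0       # intrinsic position fluctuation of the state
sig_p = 0.5       # intrinsic momentum fluctuation (sig_q * sig_p >= 1/2 holds)

heisenberg = eps_q * eta_p                              # 0.1 < 0.5
ozawa = eps_q * eta_p + sig_q * eta_p + sig_p * eps_q   # 2.125 >= 0.5
print(heisenberg >= bound)  # False: the naive error-disturbance bound fails
print(ozawa >= bound)       # True: Mr. Ozawa's three-term bound still holds
```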
While particles and bodies constantly change their alignment within their confinement, this is not always externally apparent. Various circulatory systems work within our body and affect its internal dynamics, polarizing it differently at different times; this becomes apparent only during our interaction with other bodies. Similarly, the interactions of subatomic particles are not always apparent. The elementary particles have intrinsic spin and angular momentum which continually change their state internally. The time evolution of all systems takes place in a continuous chain of discrete steps. Each particle/body acts as one indivisible dimensional system. This is a universal phenomenon that creates the uncertainty, because the internal dynamics of the fields that create the perturbations are not always known to us. We may quote an example.
Imagine an observer and a system to be observed. Between the two let us assume two interaction boundaries. Where the dimensions of one medium end and those of another medium begin, the interface of the two media is called the boundary. Thus there will be one boundary at the interface between the observer and the field, and another at the interface of the field and the system to be observed. In a simple diagram, the situation can be schematically represented as shown below:

O → |    field    | ← S

Here O represents the observer and S the system to be observed. The vertical lines represent the interaction boundaries. The two boundaries may or may not be locally similar (they may have different local density gradients). The arrows represent the effect of O and S on the medium that leads to the information exchange that is cognized as observation.
The system being observed is subject to various potential (internal) and kinetic (external) forces which act in specified ways independent of observation. For example, chemical reactions take place only after a certain temperature threshold is reached. A body changes its state of motion only after an external force acts on it. Observation does not affect these. We generally measure the outcome – not the process. The process is always deterministic; otherwise there cannot be any theory. We “learn” the process by different means – observation, experiment, hypothesis, teaching, etc. – and develop these into cognizable theory. Mr. Heisenberg was right that “everything observed is a selection from a plenitude of possibilities and a limitation on what is possible in the future”. But his logic and the mathematical format of the uncertainty principle, ε(q)η(p) ≥ h/4π, are wrong.
The observer observes the state at the instant of the second perturbation – neither the state before nor after it. This is because only this state, with or without modification by the field, is relayed back to him while the object continues to evolve in time. Observation records only this temporal state and freezes it as the result of observation (measurement). Its truly evolved state at any other time is not evident through such observation. With this, the forces acting on it also remain unknown – hence uncertain. Quantum theory takes these uncertainties into account. If ∑ represents the state of the system before perturbation and ∑ ± δ∑ represents the state at the instant of perturbation, then the difference between the two states (treating other effects as constant) is minimal if δ∑ << ∑. If I is the impulse selected by the observer to send across the interaction boundary, then δ∑ must be a function of I: i.e., δ∑ = f(I). Thus, the observation is affected by the choices made by the observer also.
The inequality ε(q)η(p) ≥ h/4π, or as it is commonly written, δx·δp ≥ ħ, permits simultaneous determination of position along the x-axis and momentum along the y-axis; i.e., δx·δp_y = 0. Hence the statement that position and momentum cannot be measured simultaneously is not universally valid. Further, position has fixed coordinates and the axes are fixed arbitrarily. The dimensions remain invariant under mutual transformation. Position along the x-axis and momentum along the y-axis can only be related with reference to a fixed origin (0, 0). If one has a non-zero value, the other has an indeterminate (or relatively zero) value (if it has position, say x = 5 and y = 7, then it has zero relative momentum, otherwise either x or y or both would not be constant, but would have extension). Multiplying both, the result will always be zero. Thus no mathematics is possible between position (fixed coordinates) and momentum (mobile coordinates), as they are mutually exclusive in space and time. They do not commute. Hence, δx·δp_y = 0.
Uncertainty is not a law of Nature. We cannot create a molecule from any combination of atoms, as it has to follow certain “special conditions”. The conditions may be different, like the restrictions on the initial perturbation sending the signal out or on the second perturbation leading to the reception of the signal back for comparison, because the inputs may be different, like c+v and c−v, or there may be other inhibiting factors, like a threshold limit for interaction. These “special conditions” and external influences, which regulate all actions and are unique by themselves – and not the process of measurement – create uncertainty. The disturbances arising out of the process of measurement are operational (technological) in nature and not existential for the particles.
Number is a property of all substances by which we differentiate between similars. If there are no similars, it is one. If there are similars, the number is many. Depending upon the sequence of perception of “one’s”, many can be 2, 3, 4…n etc. Mathematics is accumulation and reduction of similars, i.e., numbers of the same class of objects (like atomic number or mass number), which describes the changes in the physical phenomena or object when the numbers of any of the parameters are changed.
Mathematics is related to the result of measurement. Measurement is a conscious process of comparison between two similar quantities, one of which is called the scaling constant (unit). The cognition part induces the action leading to comparison, the reaction of which is again cognized as information. There is a threshold limit for such cognition. Hence Nature is mathematical in some perceptible ways. This has been proved by the German physiologist Mr. Ernst Heinrich Weber, who measured human response to various physical stimuli. Carrying out experiments with lifting increasing weights, he devised the formula: ds = k (dW / W), where ds is the threshold increase in response (the smallest increase still discernible), dW the corresponding increase in weight, W the weight already present and k the proportionality constant. This has been developed as the Weber-Fechner law. This shows that the conscious response follows a somewhat logarithmic law. This has been successfully applied to a wide range of physiological responses.
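A small Python sketch of Mr. Weber's relation (constants arbitrary and purely illustrative): integrating ds = k·dW/W gives the logarithmic response, and the just-noticeable increment dW grows in proportion to the weight already present:

```python
import math

k = 1.0                # proportionality constant (arbitrary units)
ds_threshold = 0.02    # smallest discernible change in sensation (assumed)

# Weber's law ds = k * dW / W: the just-noticeable dW scales with W itself.
for W in (1.0, 10.0, 100.0):      # weight already present
    dW = W * ds_threshold / k     # threshold increase still discernible
    print(W, dW)                  # dW grows linearly with W

# Integrating ds = k * dW / W from W0 to W gives S = k * ln(W / W0):
W0 = 1.0
for W in (2.0, 4.0, 8.0):         # equal *ratios* of stimulus ...
    print(k * math.log(W / W0))   # ... give equal *steps* of sensation
```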
Measurement is not the action of putting a scale to a rod, which is a mechanical action. Measurement is a conscious process of reaching an inference based on the action of comparison of something with an appropriate unit at “here-now”. The readings of a particular aspect, which indicate a specific state of the object at a designated instant, (out of an infinite set of temporally evolving states), is frozen for use at other times and is known as the “result of measurement”. The states relating to that aspect at all “other times”, which cannot be measured; hence remain unknown, are clubbed together and are collectively referred to as the “superposition of states” (we call it adhyaasa). This concept has not only been misunderstood, but also unnecessarily glamorized and made incomprehensible in the “undead” Schrödinger’s cat and other examples. The normal time evolution of the cat (its existential aspect) and the effect of its exposure to poisonous gas (the operational aspect) are two different unrelated aspects of its history. Yet these unrelated aspects have been coupled to bring in a state of coupled-superposition (we call it aadhyaasika taadaatmya), which is mathematically, physically and conceptually void.
Mathematics is related to accumulation and reduction of numbers. Since measurements are comparison between similar quantities, mathematics is possible only between similars (linear) or partly similars (non-linear) but never between the dissimilars. We cannot add or multiply 3 protons and 3 neutrons. They can be added only by taking their common property of mass to give mass number. These accumulation and reduction of numbers are expressed as the result of measurement after comparison with a scaling constant (standard unit) having similar characteristics (such as length compared with unit length, area with unit area, volume with unit volume, density with unit density, interval with unit interval, etc). The results of measurements are always pure numbers, i.e., scalar quantities, because the dimensions of the scaling constants are same for both the measuring device and the object being measured and measurement is only the operation of scaling up or down the unit for an appropriate number of times. Thus, mathematics explains only “how much” one quantity accumulates or reduces in an interaction involving similar or partly similar quantities and not “what”, “why”, “when”, “where”, or “with whom” about the objects involved in such interactions. These are the subject matters of physics. We will show repeatedly that in modern physics there is a mismatch and mix-up between the data, the mathematics and the physical theory.
Quantum physics implies that physical quantities usually have no values until they are observed. Therefore, the observer must be intrinsically involved in the physics being observed. This has been wrongly interpreted to mean that there might be no real world in the absence of an observer! When we measure a particular quantity, we come up with a specific value. This value is “known” only after the conscious or sentient content is added to the measurement. Thus, it is reasonable to believe that when we do not measure or perceive, we do not “know” the value – the conscious or sentient content does not operate and remains inert – and not that the quantity does not have any existential value. Here the failure of the physicists to find the correct “mathematics” to support their “theory” has been put forth as a pretext for denying reality. Mathematics is an expression of Nature, not its sole language. Though the observer has a central role in quantum theories, its true nature and mechanism have eluded the scientists. There cannot be an equation to describe the observer, the glory of the rising sun, the grandeur of the towering mountain, the numbing expanse of the night sky, the enchanting fragrance of the wild flower or the endearing smile on the lips of the beloved. It is not the same as any physical or chemical reaction or curvature of lips.
Mathematics is often manipulated to spread the cult of incomprehensibility. The electroweak theory is extremely speculative and uses questionable mathematics as a cover for opacity to predict an elusive Higgs mechanism. Yet, tens of millions of meaningless papers have been read out in millions of seminars worldwide based on such unverified myth for half a century and more, wasting enormous resources that could otherwise have been used to make the Earth a better place to live. The physicists use data from the excellent work done by experimental scientists to develop theories based on reverse calculation to match the result. It is nothing but the politics of physics – claiming credit for bringing water into the river when it rains. Experiment without the backing of theory is blind. It can lead to disaster. Rain also brings floods. Experiments guided by economic and military considerations have brought havoc to our lives.
We don’t see the earlier equations in their original format because all verified inverse square laws are valid only in spherically symmetric emission fields that rule out virtual photons and messenger photons etc. Density is a relative term and relative density is related to volume, which is related to diameter. Scaling up or down the diameter brings in corresponding changes in relative density. This gives rise to inverse square laws in a real emission field. The quanta cannot spontaneously emit other quanta without violating conservation laws. This contradicts the postulates of QED and QFT. The modern physicists are afraid of reality. To cover up for their inadequacies, the equations have been rewritten using different unphysical notations to make it incomprehensible for even those making a career out of it. Reductionism, superstitious belief in the validity of “accepted theories” and total reliance on them, and the race for getting recognition at the earliest by any means, compound the problem. Thus, while the “intellectual supremacy (?)” of the “establishment scientists” is reinforced before “outsiders”, it goes unchallenged by even their own community.
The modern physicists disregard even reality. Example: in Reviews of Modern Physics, Volume 77, July 2005, p. 839, Mr. Gell-Mann says: “In order to obtain such relations that we conjecture to be true, we use the method of abstraction from a Lagrangian field-theory model. In other words, we construct a mathematical theory of the strongly interacting particles, which may or may not have anything to do with reality, find suitable algebraic relations that hold in the model, postulate their validity, and then throw away the model. We may compare this process to a method sometimes employed in French cuisine: a piece of pheasant meat is cooked between two slices of veal, which are then discarded”. Is it physics? Thankfully, he has not differentiated between the six different categories of veal – Prime, Choice, Good, Standard, Utility and Cull – linking them to the six quarks. Veal is used in the cuisine because of its lack of natural fat, delicate flavor and fine texture. These qualities creep into the pheasant meat even after the veal is discarded. But what Mr. Gell-Mann proposes is: use A to prove B, then throw away A! B cannot stand without A. It is the ground for B.
A complete theory must have elements corresponding to every element of reality, over and above those implicit in the so-called wave-function. Mr. David Hilbert argues: “Mathematical existence is merely freedom from contradiction”. This implies that mathematical structures simply do not exist unless they are logically consistent. The validity of a mathematical statement is judged by its logical consistency. The validity of a physical statement is judged by its correspondence to reality. Russell's paradox and the constructions devised to avoid it – such as the Zermelo-Fraenkel set theory – point out that mathematics on its own does not lead to a sensible universe. We must apply constraints in order to obtain consistent physical reality from mathematics. Unrestricted axioms lead to Russell's paradox. Manipulation of mathematics to explain physics has violated the principle of logical consistency in most cases. One example is renormalization, or elimination of infinities using a “counter term”, which is not logically consistent, as mathematically all operations involving infinity are void. Some describe it as divergence, linking it to the concept of limit. We will show that the problem with infinities can be solved in mathematically consistent ways without using a “counter term”, by re-examining the concept of limit.
Similarly, Mr. Feynman’s sum-over histories is the “sum of the particle’s histories” in imaginary time rather than in real time. Feynman had to do the sum in imaginary time because he was following Mr. Minkowski, who assigned time to the imaginary axis. That is the four vector field in GR. Mr. Minkowski assigned time to that axis to make the field symmetrical. It was a convenience for him, not a physical necessity or reality. But once it is done, it continued to de-normalize everything. Mr. Feynman was not using imaginary time; he was using real time, but assigned it to the imaginary axis. The theory gets the correct answer up to a certain limit not because it is correct, but because it had been proposed through back calculation from experimental results. The gaps and the greater technical difficulties of trying to sum these in real time are avoided through technical jargon. These greater technical difficulties are also considered as a form of renormalization, but they require infinite renormalization, which is mathematically not valid. Mr. Feynman’s renormalization is heuristics: “mathematics” specially designed to explain a limited set of data.
Mathematics is also related to the measurement of the time evolution of the state of something. These time evolutions depict rates of change. When such change is related to motion – like velocity, acceleration, etc. – it implies total displacement from the position occupied by the body and movement to the adjacent position. This process is repeated due to inertia till it is modified by the introduction of other forces. Thus, these are discrete steps that can be related only to three-dimensional structures. Mathematics measures only the numbers of these steps, the distances involved including amplitude, wave-length, etc., and the quanta of energy applied, etc. Mathematics is related also to the measurement of areas or curves on a graph – the so-called mathematical structures, which are two-dimensional structures. Thus, the basic assumptions of all topologies, including symplectic topology, linear and vector algebra and the tensor calculus, all representations of vector spaces – whether abstract or physical, real or complex, composed of whatever combination of scalars, vectors, quaternions, or tensors – and the current definitions of the point, line, and derivative are necessarily at least one dimension short of physical space.
The graph may represent space, but it is not space itself. The drawings of a circle, a square, a vector or any other physical representation, are similar abstractions. The circle represents only a two dimensional cross section of a three dimensional sphere. The square represents a surface of a cube. Without the cube or similar structure (including the paper), it has no physical existence. An ellipse may represent an orbit, but it is not the dynamical orbit itself. The vector is a fixed representation of velocity; it is not the dynamical velocity itself, and so on. The so-called simplification or scaling up or down of the drawing does not make it abstract. The basic abstraction is due to the fact that the mathematics that is applied to solve physical problems actually applies to the two dimensional diagram, and not to the three dimensional space. The numbers are assigned to points on the piece of paper or in the Cartesian graph, and not to points in space. If one assigns a number to a point in space, what one really means is that it is at a certain distance from an arbitrarily chosen origin. Thus, by assigning a number to a point in space, what one really does is assign an origin, which is another point in space leading to a contradiction. The point in space can exist by itself as the equilibrium position of various forces. But a point on a paper exists only with reference to the arbitrarily assigned origin. If additional force is applied, the locus of the point in space resolves into two equal but oppositely directed field lines. But the locus of a point on a graph is always unidirectional and depicts distance – linear or non-linear, but not force. Thus, a physical structure is different from its mathematical representation.
The word vacuum has always been used to mean “the thing that is not material or particulate”. By definition, the vacuum is supposed to be nothing, but often it is used to mean something. This is a contradiction because it begs the paradox of Parmenides: If the vacuum is composed of virtual particle pairs, then it no longer is the vacuum: it is matter. If everything is matter, then we have a plenum in which motion is impossible. Calling this matter “virtual” is camouflage. When required to be transparent, treat it as nothing and when it is required to have physical characteristics (like polarity), treat it as something! Defining something as both x and non-x is not physics.
There is no surprise that the equations of QCD remain unsolved at energy scales relevant for describing atomic nuclei! The various terms of QCD like “color”, “flavor”, the strangeness number (S), the baryon number (B), etc., are not precisely defined and cannot be mechanically assigned. Even spin cannot be mechanically assigned for quarks, except by assigning a number. The quantum spin is said to be not real, since quarks are point-like and cannot spin. If quarks cannot spin, how do chirality and symmetry apply to them at this level? How can a point express chirality, and how can a point be either symmetrical or non-symmetrical? If W bosons that fleetingly mediate particles have been claimed to leave their footprints, quarks should be more stable! But single quarks have never been seen in bubble chambers, ionization chambers, or any other experiments. We will explain the mechanism of spin (1/6 for quarks) to show that it has macro equivalents and that spin zero means absence of spin – which implies only mass-less energy transfer.
Objects in three-dimensional space evolve in time. Mathematical structures in two dimensions do not evolve in time – they only get mechanically scaled up or down. Hawking and others were either confused or trying to fool others when they suggested the “time cone” and “event horizon” by manipulating a two-dimensional structure, suggesting a time evolution, and then converting it to a three-dimensional structure. Time, unlike distance that is treated as space in a graph, is an independent variable. We cannot plot or regulate time. We can only measure time, or at best accommodate our actions in time. A light pulse in a two-dimensional field evolves in time as an expanding circle and not as a conic section. In three dimensions, it will be an expanding sphere and not a cone. The reverse direction will not create a reverse cone, but a smaller sphere. Thus, their concept of the time cone is not even a valid mathematical representation of physical reality. Researchers have found a wide variety of stellar collapse scenarios in which an event horizon does not form, so that the singularity remains exposed to our view. Physicists call it a “naked singularity”. In such a case, matter and radiation can both fall in and come out, whereas matter falling into the singularity inside a black hole would be on a one-way trip. Thus, the “naked singularity” proves the concept of the “event horizon” wrong.
The description of the measured state at a given instant is physics and the use of the magnitude of change at two or more designated instants to predict the outcome at other times is mathematics. But the concept of measurement has undergone a big change over the last century leading to changes in “mathematics of physics”. It all began with the problem of measuring the length of a moving rod. Two possibilities of measurement suggested by Mr. Einstein in his 1905 paper were:
(a) “The observer moves together with the given measuring-rod and the rod to be measured, and measures the length of the rod directly by superposing the measuring-rod, in just the same way as if all three were at rest”, or
(b) “By means of stationary clocks set up in the stationary system and synchronizing with a clock in the moving frame, the observer ascertains at what points of the stationary system the two ends of the rod to be measured are located at a definite time. The distance between these two points, measured by the measuring-rod already employed, which in this case is at rest, is the length of the rod”
The method described at (b) is misleading. We can do this only by setting up a measuring device to record the emissions from both ends of the rod at the designated time (which is the same as taking a photograph of the moving rod) and then measuring the distance between the two points on the recording device in units of the velocity of light or any other unit. But the picture will not give a correct reading, for two reasons:
· If the length of the rod is small or the velocity is small, then the length contraction will not be perceptible according to the formula given by Mr. Einstein (see the numerical illustration after this list).
· If the length of the rod is big or the velocity is comparable to that of light, then light from different points of the rod will take different times to reach the recording device, and the picture we get will be distorted by different Doppler shifts. Thus, there is only one way of measuring the length of the rod: as in (a).
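The numerical illustration referred to above: a short Python sketch of the standard contraction formula L = L0·√(1 − v²/c²), showing that the effect is imperceptible at ordinary velocities and large only when v is comparable to c (the chosen speeds are illustrative):

```python
import math

c = 2.998e8   # speed of light, m/s

def contracted(L0, v):
    """Moving length per the formula L = L0 * sqrt(1 - v^2/c^2)."""
    return L0 * math.sqrt(1 - (v / c) ** 2)

L0 = 1.0  # a one-metre rod
for v in (30.0, 3.0e4, 0.1 * c, 0.9 * c):   # car, orbital speed, 0.1c, 0.9c
    print(v, L0 - contracted(L0, v))        # contraction in metres
# ~5e-15 m and ~5e-9 m at everyday speeds: far too small to perceive
```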
Here we are also reminded of an anecdote relating to a famous scientist, who once directed two of his students to measure the wave-length of sodium light precisely. Both students returned with different results – one resembling the accepted value and the other a different value. Upon enquiry, the latter student replied that he had also come up with the accepted value, but since everything including the Earth and the scale on it is moving, for precision measurement he applied length contraction to the scale, treating the star Betelgeuse as a reference point. This changed the result. The scientist told him to treat the scale and the object to be measured as moving with the same velocity and recalculate the wave-length of light again without any reference to Betelgeuse. After some time, both students returned to tell him that the wave-length of sodium light is infinite. To the surprised scientist, they explained that since the scale is moving with light, its length would shrink to zero. Hence it would require an infinite number of scales to measure the wave-length of sodium light!
Some scientists we have come across try to overcome this difficulty by pointing out that length contraction occurs only in the direction of motion. They claim that if we hold the rod in a direction transverse to the direction of motion, then there will be no length contraction. But we fail to understand how the length can be measured by holding the rod in a transverse direction. If the light path is also transverse to the direction of motion, then the terms c+v and c−v vanish from the equation, making the entire theory redundant. If the observer moves together with the given measuring-rod and the rod to be measured, and measures the length of the rod directly by superposing the measuring-rod while moving with it, he will not find any difference, because the length contraction, if real, will be in the same proportion for both.
The fallacy in the above description is that if one treats “as if all three were at rest”, one cannot measure velocity or momentum, as the object will be relatively at rest, which means zero relative velocity. Either Mr. Einstein missed this point or he was clever enough to camouflage it when, in his 1905 paper, he said: “Now to the origin of one of the two systems (k) let a constant velocity v be imparted in the direction of the increasing x of the other stationary system (K), and let this velocity be communicated to the axes of the co-ordinates, the relevant measuring-rod, and the clocks”. But is this the velocity of k as measured from k, or is it the velocity as measured from K? This question is extremely crucial. K and k each have their own clocks and measuring rods, which are not treated as equivalent by Mr. Einstein. Therefore, according to his theory, they will measure the velocity of k differently. But Mr. Einstein does not assign the velocity specifically to either system. Everyone missed it and all were misled. His spinning disk example in GR also falls for the same reason.
Mr. Einstein uses a privileged frame of reference to define synchronization and then denies the existence of any privileged frame of reference. We quote from his 1905 paper on the definition of synchronization: “Let a ray of light start at the ‘A time’ t_A from A towards B, let it at the ‘B time’ t_B be reflected at B in the direction of A, and arrive again at A at the ‘A time’ t′_A. In accordance with definition the two clocks synchronize if: t_B − t_A = t′_A − t_B.” He further assumes:
1. If the clock at B synchronizes with the clock at A, the clock at A synchronizes with the clock at B.
2. If the clock at A synchronizes with the clock at B and also with the clock at C, the clocks at B and C also synchronize with each other.”
The concept of relativity is valid only between two objects. Introduction of a third object brings in the concept of a privileged frame of reference, and all equations of relativity fall. Yet, Mr. Einstein does precisely the same while claiming the very opposite. In the above description, the clock at A is treated as a privileged frame of reference for proving the synchronization of the clocks at B and C. Yet, he claims it is relative!
The cornerstone of GR is the principle of equivalence. It has been generally accepted without much questioning. But if we analyze the concept scientifically, we find a situation akin to Russell's paradox of set theory, which raises an interesting question: If S is the set of all sets which do not have themselves as members, is S a member of itself? The general principle (discussed in our book Vaidic Theory of Numbers) is that there cannot be many without one, meaning there cannot be a set without individual elements (example: a library – a collection of books – cannot exist without individual books). In one there cannot be many, implying there cannot be a set of one element, or a set of one element is superfluous (example: a book is not a library) – the objects would be individual members unrelated to each other, as is a necessary condition of a set. Thus, in the ultimate analysis, a collection of objects is either a set with its elements, or individual objects that are not the elements of a set.
Let us examine set theory and consider the property p(x): x ∉ x, which means the defining property p(x) of any element x is such that it does not belong to x. Nothing appears unusual about such a property. Many sets have this property. A library [p(x)] is a collection of books. But a book is not a library [x ∉ x]. Now, suppose this property defines the set R = {x : x ∉ x}. It must be possible to determine if R ∈ R or R ∉ R. However, if R ∈ R, then the defining property of R implies that R ∉ R, which contradicts the supposition that R ∈ R. Similarly, the supposition R ∉ R confers on R the right to be an element of R, again leading to a contradiction. The only possible conclusion is that the property “x ∉ x” cannot define a set. This idea is also known as the Axiom of Separation in Zermelo-Fraenkel set theory, which postulates that “objects can only be composed of other objects” or “objects shall not contain themselves”. This concept has been explained in detail with examples in the chapter on motion in the ancient treatise “Padaartha Dharma Samgraha” – Compendium on Properties of Matter – written by Aachaarya Prashastapaada.
In order to avoid this paradox, it has to be ensured that a set is not a member of itself. It is convenient to choose a “largest” set in any given context, called the universal set, and confine the study to the elements of that universal set only. This set may vary in different contexts, but in a given set-up, the universal set should be so specified that no occasion ever arises to digress from it. Otherwise, there is every danger of colliding with paradoxes such as Russell's paradox. Or, as it is put in everyday language: “A man of Seville is shaved by the Barber of Seville if and only if the man does not shave himself.”
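The contradiction can even be exhibited mechanically. In the Python sketch below (an illustration of ours, not from the text), a "set" is modeled extensionally as a membership predicate; ordinary predicates behave, but Russell's predicate R, asked about itself, never settles on a truth value:

```python
# Model a "set" extensionally as a membership predicate: s(x) is True iff x in s.
def library(x):
    return getattr(x, "is_book", False)   # a library contains books, never itself

print(library(library))   # False: ordinary predicates cause no trouble

# Russell's "set" R = { x : x not in x }:
def R(x):
    return not x(x)

# Asking whether R is a member of itself forces R(R) == not R(R):
try:
    R(R)
except RecursionError:
    print("R(R) never settles: the property 'x not in x' defines no set")
```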
There is a similar problem in the theory of General Relativity and the principle of equivalence. Inside a spacecraft in deep space, objects behave like suspended particles in a fluid or like the asteroids in the asteroid belt. Usually, they are relatively stationary in the medium unless some other force acts upon them. This is because of the relative distribution of mass inside the spacecraft and its dimensional volume, which determines the average density at each point inside the spacecraft. Further, the average density of the local medium of space is factored into this calculation. The light ray from outside can be related to the spacecraft only if we consider the bigger frame of reference containing both the space emitting the light and the spacecraft. If the passengers could observe the scene outside the spacecraft, they would notice this difference and know that the spacecraft is moving. In that case, the reasons for the apparent curvature would be known. If we consider outside space as a separate frame of reference unrelated to the spacecraft, the ray emitted by it cannot be considered inside the spacecraft (we call it praagaabhaava). The emission of rays will be restricted to those emanating from within the spacecraft. In that case, the ray will move straight inside the spacecraft. In either case, the description of Mr. Einstein is faulty. Thus, both SR and GR, including the principle of equivalence, are wrong descriptions of reality. Hence all mathematical derivatives built upon these wrong descriptions are also wrong. We will explain all so-called experimental verifications of SR and GR by alternative mechanisms or other verifiable explanations.
Relativity is an operational concept, not an existential concept. The equations apply to data and not to particles. When we approach a mountain from a distance, its volume appears to increase. What this means is that the visual perception of volume (the scaling up of the angle of incoming radiation) changes at a particular rate. But locally, there is no such impact on the mountain. It exists as it was. The same principle applies to the perception of objects with high velocities. The changing volume is perceived at different times depending upon our relative velocity. If we move fast, it appears earlier. If we move slowly, it appears later. Our differential perception is related to changing angles of radiation and not to changing states of the object. It does not apply to locality. Mr. Einstein has also admitted this. But the Standard Model treats these as absolute changes that not only change the perceptions, but change the particle also!
The above description points to some very important concepts. If the only way to measure is to move with the object of measurement or allow it to pass between two points at two instants (and measure the time and distance for comparison), it implies that all measurements can be done only at “here-now”. Since “here-now” is ever changing, how do we describe the result? We cut out an easily perceived and fairly repetitive segment of it and freeze it or its subdivisions for future reference as the scaling constant (unit). We compare all future states (also past, where it had been measured) with this constant and call the result of such comparison as the “result of measurement”. The operations involving such measurement are called mathematics. Since the result of measurement can only be scalar quantities, i.e., numbers, mathematics is the science of numbers. Since numbers are always discrete units, and the objects they represent are bound by different degrees of freedom, mathematics must follow these principles. But in most of the “mathematics” used by the physicists, these principles are totally ignored.
Let us take the example of complex numbers. Imaginary numbers are abstract descriptions and illusions that can never be embodied in the “phenomena”, because they do not conform to the verifiable laws of the phenomena of nature. Conversely, only the real can be embodied in verifiable phenomena. A negative sign assigned to a number points to a “deficiency of a physical characteristic” at “here-now”. Because of conservation laws, the negative sign must be matched by a corresponding positive sign “elsewhere”. While the deficiency is at “here-now”, the corresponding positive part is not at “here-now”. They seek each other out, which can happen only in “other times”.
Let us take the example of an atom. Generally, we never talk about the total charge of a particle – we describe only the net charge. Thus, when we describe a positively or negatively charged ion, we mean that the particle has both charges, but the magnitude of one category of charge is greater than that of the other. The positively charged proton is deficient in negative charge, i.e., it has a charge of –(–1) in electron charge units. This double negative appears as the positive charge (actually, the charge of the proton falls slightly short of +1). We posit that the negative potential is the real and the only charge. Positive potential is perceived due to a relative deficiency (we call it nyoona) of negative potential. We will discuss this statement while explaining what an electron is. The proton tries to make up its relative deficiency by uniting with an electron to become a neutron (or a hydrogen atom, which is also unstable because of the deficiency). The proton-neutron interaction is dependent upon neutrinos-antineutrinos. Thus, there is a deficiency of neutrinos-antineutrinos. The neutron and proton-electron pairs search for it. This process goes on. At every stage, there is an addition, which leads to a corresponding “release” leading to fresh deficiency in a linear mechanism. Thus, the nuclei weigh less than their constituents. This deficiency is known as the mass defect, which represents the energy released when the nucleus is formed. The deficiency generates the charge that is the cause of all other forces and non-linear interactions.
The operation of deficiency leads to linear addition with corresponding subtraction. This is universally true for everything and we can prove it. Hence a deficiency cannot be reduced in a non-linear manner. This is because both positive and negative potentials do not separately exist at “here-now”, where the mathematics is done. They must be separated in space or exist as net charge. For this reason, negative numbers (–1) cannot be reduced non-linearly (√–1). Also why stop only at square-root? Why not work out fourth, eighth etc, roots ad infinitum? For numbers other than 1, they will not give the same result. This means complex numbers are restricted to (√–1). Since (–1) does not exist at “here-now”, no mathematics is possible with it. Thus, the complex numbers are neither physical nor mathematical. This is proved by the fact that complex numbers cannot be used in computer programming, which mimics conscious processes of measurement. Since mathematics is done by conscious beings, there cannot be mathematics involving un-physical complex numbers.
To say that complex numbers are “complete” because they “include real numbers and more” is like saying dreams are “complete” because they “include what we perceive in the wakeful state and more”. Inertia is a universal law of Nature that arises after all actions. Thought is the inertia of mind, which is our continued response to initial external stimuli. During the wakeful state, “conscious actions” involve perception through sense organs, which is nothing but the measurement of the fields set up by objects by the corresponding fields set up by our respective sense organs at “here-now”. Thus, any inertia they generate is bound not only by the existential physical characteristics of the objects of perception, but also by the intervening field. During dreams, the ocular interaction with external fields ceases, but their memory causes inertia of mind due to specific tactile perception during sleep. Thus, we dream only of whatever we have seen in our wakeful state. Since memory is a frozen state (saakshee), like a scaling constant, and is free from the restrictions imposed by the time-evolving external field, dreams are also free from these restrictions. We have seen horses that run and birds that fly. In a dream, we can generate operational images of flying horses. This is not possible in the existential wakeful state. These are not the ways of Nature. This is not physics. This is not mathematics either.
Mr. Dirac proposed a procedure for transferring the characteristic quantum phenomenon of discreteness of physical quantities from the quantum mechanical treatment of particles to a corresponding treatment of fields. Conceptually, such treatment is void, as by definition, a particle is discrete whereas a field is analog. A digitized field is an oxymoron. Digits are always discrete units. What we actually mean by a digitized field is that we measure it in discrete steps unit by unit. Employing the quantum mechanical theory of the harmonic oscillator, Mr. Dirac gave a theoretical description of how photons appear in the quantization of the electromagnetic radiation field. Later, Mr. Dirac’s procedure became a model for the quantization of other fields as well. But the fallacy here is evident. There are some potential ingredients of the particle concept which are explicitly opposed to the corresponding (and therefore opposite) features of the field concept.
A core characteristic of a field is supposed to be that it is a system with an infinite number of degrees of freedom, whereas the very opposite holds true for particles. What this really means is that the field interacts with many particles simultaneously, whereas a particle is placed in, and interacts with other particles only through, one field (with its sub-fields, like electrical or magnetic fields). A particle can be referred to by the specification of the coordinates x(t) that pertain to the time evolution of its center of mass as representative of the particle (presupposing its dimensional impenetrability). However, the operator-valuedness of quantum fields means that to each space-time point x(t) a field value φ(x, t) is assigned, which is called an operator. Operators are generally treated as mathematical entities which are defined by how they act on something. They do not represent definite values of quantities, but specify what can be measured. This is a fundamental difference between classical fields and quantum fields, because an operator-valued quantum field φ(x, t) does not by itself correspond to definite values of a physical quantity like the strength of an electromagnetic field. The quantum fields are determinables, as they are described by mappings from space-time points to operators. This description is true but interpreted wrongly. Left to itself, a particle will continue in its state indefinitely. It evolves in time because of its interaction with the field due to the differential density that appears as charge. Unlike particles, where the density is protected by their dimensions, universal fields (where energy fields are sub-fields called “jaala” – literally net) act like fluids. Hence their density is constantly fluctuating and cannot be precisely defined. Thus, the field continuously strives to change the state of the particle, which is its time evolution. The pace of this time evolution is the time dilation for that particle. There is no such thing as universal time dilation. Hence we call time “vastu patita”, literally meaning based on changes in objects. Thus, it can be called an operator.
Another feature of the particle concept is explicitly in opposition to the field concept. In pure particle ontology, the interaction between remote particles can only be understood as an action at a distance. In contrast to that, in field ontology, or a combined ontology of particles and fields, local action is implemented by mediating fields. Further, classical particles are massive and impenetrable, again in contrast to classical fields. The concept of particles has been evolving through history of science in accordance with the latest scientific theories. Therefore, particle interpretation for QFT is a very difficult proposition.
Mr. Wigner’s famous analysis of the Poincaré group is often assumed to provide a definition of elementary particles. Although Mr. Wigner found a classification of particles, his analysis does not contribute very much to the question of what a particle is and whether a given theory can be interpreted in terms of particles. What Mr. Wigner has given is rather a conditional answer. If relativistic quantum mechanics can be interpreted in terms of particles, then the possible types of particles correspond to irreducible unitary representations of the Poincaré group. However, the question whether, and if so in what sense, relativistic quantum mechanics can be interpreted as a particle theory at all has not been addressed in Mr. Wigner’s analysis. For this reason, the discussion of the particle interpretation of QFT is not closed with Mr. Wigner’s analysis. For example, the pivotal question of the localizability of particle states is still open. Quantum physics has generated many more questions than it has solved.
Each measurable parameter in a physical system is said to be associated with a quantum mechanical operator. Part of the development of quantum mechanics is the establishment of the operators associated with the parameters needed to describe the system. The operator associated with the system energy is called the Hamiltonian. The word operator can in principle be applied to any function. However, in practice it is most often applied to functions that operate on mathematical entities of higher complexity than real numbers, such as vectors, random variables, or other “mathematical expressions”. The differential and integral operators, for example, have domains and co-domains whose elements are “mathematical expressions of indefinite complexity”. In contrast, functions with vector-valued domains but scalar ranges are called “functionals” and “forms”. In general, if either the domain or co-domain (or both) of a function contains elements significantly more complex than real numbers, that function is referred to as an operator. Conversely, if neither the domain nor the co-domain of a function contains elements more complex than real numbers, that function is referred to simply as a function. Trigonometric functions such as sine, cosine, etc., are examples of the latter case. Thus, the operators or the Hamiltonian are not mathematical, as they do not accumulate or reduce particles by themselves. These are illegitimate manipulations in the name of mathematics.
The Hamiltonian is said to contain the operations associated with both kinetic and potential energies. Kinetic energy is related to motion of the particle – hence uses binomial terms associated with energy and fields. This is involved in interaction with the external field while retaining the identity of the body, with its internal energy, separate from the external field. Potential energy is said to be related to the position of the particle. But it remains confined to the particle even while the body is in motion. The example of pendulum, where potential energy and kinetic energy are shown as interchangeable is a wrong description, as there is no change in the potential energy between the pendulum when it is in motion and when it is at rest.
The motion of the pendulum is due only to inertia. It starts with the application of force to disturb the equilibrium position. Then both inertia of motion and inertia of restoration take over. Inertia of motion is generated when the body is fully displaced. Inertia of restoration takes over when the body is partially displaced, as in the pendulum, which remains attached to the clock. This is one of the parameters that cause wave and sound generation through transfer of momentum. As the pendulum swings to one side due to inertia of motion, the inertia of restoration tries to pull it back to its equilibrium position. This determines the speed and direction of motion of the pendulum. Hence the frequency and amplitude depend on the length of the cord (this determines the area of the cross-section) and the weight of the pendulum (this determines the momentum). After reaching the equilibrium position, the pendulum continues to move due to inertia of motion or restoration. This process is repeated. If the motion is sought to be explained by an exchange of PE and KE, then we must account for the initial force that started the motion. Though it ceases to exist, its inertia continues. But the current theories ignore it. The only verifiable explanation is: kinetic energy, which is determined by factors extraneous to the body, does not interfere with the potential energy.
In a Hamiltonian, the potential energy is shown as a function of position such as x or the potential V(x). The spectrum of the Hamiltonian is said to be the set of all possible outcomes when one measures the total energy of a system. A body possessing kinetic energy has momentum. Since position and momentum do not commute, the functions of position and momentum cannot commute. Thus, Hamiltonian cannot represent total energy of the system. Since potential energy remains unchanged even in motion, what the Hamiltonian actually depicts is the kinetic energy only. It is part of the basic structure of quantum mechanics that functions of position are unchanged in the Schrödinger equation, while momenta take the form of spatial derivatives. The Hamiltonian operator contains both time and space derivatives. The Hamiltonian operator for a class of velocity-dependent potentials shows that the Hamiltonian and the energy of the system are not simply related, and while the former is a constant of motion and does not depend on time explicitly, the latter quantity is time-dependent, and the Heisenberg equation of motion is not satisfied.
The spectrum of the Hamiltonian is said to be decomposed, via its spectral measures, into a) pure point, b) absolutely continuous, and c) singular parts. The pure point spectrum can be associated with eigenvectors, which in turn are the bound states of the system – hence discrete. The absolutely continuous spectrum corresponds to the so-called free states. The singular spectrum comprises physically impossible outcomes. For example, the finite potential well admits bound states with discrete negative energies and free states with continuous positive energies. When we include un-physical parameters, only such outcomes are to be expected. Since all three decompositions come out of the same Hamiltonian, they must arise through different mechanisms. Hence a Hamiltonian cannot be used without referring to the specific mechanism that causes the decompositions.
Function is a relationship between two sets of numbers or other mathematical objects where each member of the first set is paired with only one member of the second set. It is an equation, for which any x that can be plugged into the equation, will yield exactly one y out of the equation - one-to-one correspondence – hence discreteness. Functions can be used to understand how one quantity varies in relation to (is a function of) changes in the second quantity. Since no change is possible without energy, which is said to be quantized, such changes should also be quantized, which imply discreteness involving numbers.
The Lagrangian is used in both celestial mechanics and quantum mechanics. In quantum mechanics, the Lagrangian has been extended into the Hamiltonian. Although Lagrange only sought to describe classical mechanics, the action principle that is used to derive the Lagrange equation is now recognized to be applicable to quantum mechanics. In celestial mechanics, the gravitational field causes both the kinetic energy and the potential energy. In quantum mechanics, charge causes both the kinetic energy and the potential energy. The potential energy is the energy contained in a body when it is not in motion. The kinetic energy is the energy contained by the same body when it is put in motion. The motions of celestial bodies are governed by gravitational fields, and the potential is said to be the gravitational potential. Thus, originally the Lagrangian must have been a single field differential. At its simplest, the Lagrangian is the kinetic energy of a system T minus its potential energy V. In other words, one has to subtract the gravitational potential from the gravitational kinetic energy! Is that possible?
Mr. Newton thought that the kinetic energy and the potential energy of a single particle would sum to zero. He solved many problems of the time-varying constraint force required to keep a body (like a pendulum) in a fixed path by equating the two. But in that case, the Lagrangian L = T – V will always be zero or 2T. In both cases it is of no use. To overcome the problem, it has been suggested that Lagrangian only considers the path and chooses a set of independent generalized coordinates that characterize the possible motion. But in that case, we will know about the path, but not about the force.
Despite its much publicized predictive successes, quantum mechanics has been plagued by conceptual difficulties since its inception. No one is really clear about what is quantum mechanics? What does quantum mechanics describe? Since it is widely agreed that any quantum mechanical system is completely described by its wave function, it might seem that quantum mechanics is fundamentally about the behavior of wave functions. Quite naturally, all physicists starting with Mr. Erwin Schrödinger, the father of the wave function, wanted this to be true. However, Mr. Schrödinger ultimately found it impossible to believe. His difficulty was not so much with the novelty of the wave function: “That it is an abstract, unintuitive mathematical construct is a scruple that almost always surfaces against new aids to thought and that carries no great message”. Rather, it was that the “blurring” suggested by the spread out character of the wave function “affects macroscopically tangible and visible things, for which the term ‘blurring’ seems simply wrong” (Schrödinger 1935).
For example, in the same paper Mr. Schrödinger noted that it may happen in radioactive decay that “the emerging particle is described ... as a spherical wave ... that impinges continuously on a surrounding luminescent screen over its full expanse. The screen however does not show a more or less constant uniform surface glow, but rather lights up at one instant at one spot ....”. He observed that one can easily arrange, for example by including a cat in the system, “quite ridiculous cases” with the ψ-function of the entire system having in it the living and the dead cat mixed or smeared out in equal parts. Thus it is because of the “measurement problem” of macroscopic superposition that Schrödinger found it difficult to regard the wave function as “representing reality”. But then what does reality represent? With evident disapproval, Schrödinger describes how the reigning doctrine rescues itself by having recourse to epistemology. We are told that no distinction is to be made between the state of a natural object and what we know about it, or perhaps better, what we can know about it. Actually – it is said - there is intrinsically only awareness, observation, measurement.
One of the assumptions of quantum mechanics is that any state of a physical system and its time evolution is represented by the wave-function, obtained by the solution of time-dependent Schrödinger equation. Secondly, it is assumed that any physical state is represented by a vector in Hilbert space being spanned on one set of Hamiltonian eigenfunctions and all states are bound together with the help of superposition principle. However, if applied to a physical system, these two assumptions exhibit mutual contradiction. It is said that any superposition of two solutions of Schrödinger equation is also a solution of the same equation. However, this statement can have physical meaning only if the two solutions correspond to the same initial conditions.
By superposing solutions belonging to different initial conditions, we obtain solutions corresponding to fully different initial conditions, which implies that significantly different physical states have been combined in a manner that is not allowed. The linear differential equations that hold for general mathematical superposition principles have nothing to do with physical reality, as actual physical states and their evolution are uniquely defined by the corresponding initial conditions. These initial conditions characterize individual solutions of the Schrödinger equation. They correspond to different properties of a physical system, some of which are conserved during the entire evolution.
The physical superposition principle has been deduced from the linearity of the Schrödinger differential equation without any justification. This arbitrary assumption has been introduced into physics without any proof. Solutions belonging to diametrically different initial conditions have been arbitrarily superposed. Statements like “quantum mechanics, including the superposition rules, has been experimentally verified” are absolutely wrong. All tests hitherto have concerned only consequences following from the Schrödinger equation.
The measurement problem in quantum physics is really not a problem, but the result of wrong assumptions. As has been described earlier, measurement is done only at “here-now”. It depicts the state only at “here-now” – neither before nor after it. Since all other states are unknown, they are clubbed together and described as superposition of states. This does not create a bizarre state of “un-dead” cat - the living and the dead cat mixed or smeared out in equal parts - at all other times. As has already been pointed out, the normal time evolution of the cat and the effect of its exposure to poisonous gas are two different unrelated aspects. The state at “here-now” is a culmination of the earlier states that are time evolution of the object. This is true “wave function collapse”, where the unknown collapses to become transitorily known (since the object continues to evolve in time). The collapse does not bring the object to a fixed state at ever-after. It describes the state only at “here-now”.
How much one quantity is changing in response to changes in some other quantity is called its derivative. Contrary to general perception, derivative is a constant differential over a subinterval, not a diminishing differential as one approaches zero. There cannot be any approach to zero in calculus because then there will be no change – hence no derivative. The interval of the derivative is a real interval. In any particular problem, one can find the time that passes during the subinterval of the derivative. Thus, nothing in calculus is instantaneous.
Derivatives are of two types. Geometrical derivatives presuppose that the function is continuous. At points of discontinuity, a function does not have a derivative. Physical derivatives are always discrete. Since numbers are always discrete quantities, a continuous function cannot represent numbers universally. While fields and charges are continuous, particles and mass are discrete. The differentiating characteristic between these two is dimension. Dimension is the characteristic of objects by which we differentiate the “inner space” of an object from its “outer space”. In the case of mass, it is discrete and relatively stable. In the case of fluids, it is continuous but unstable. Thus, the term derivative has to be used carefully. We will discuss its limitations by using some physical phenomena. We will deal with dimension, gravity and singularity cursorily, and with spin and entanglement separately. Here we focus on bare mass and bare charge, which will also explain black holes, dark matter and dark energy. We will also explain “what is an electron” and review Coulomb’s law.
Even modern mathematicians and physicists do not agree on many concepts. Mathematicians insist that zero has existence but no dimension, whereas physicists insist that since the minimum possible length is the Planck scale, the concept of zero has vanished! The Lie algebra corresponding to SU(n) is a real and not a complex Lie algebra. The physicists introduce the imaginary unit i to make it complex. This is different from the convention of the mathematicians. Often the physicists apply the “brute force approach”, in which many parameters are arbitrarily reduced to zero or unity to get the desired result. One example is the mathematics for solving the equations for the libration points. But such arbitrary reduction changes the nature of the system under examination (the modern values are slightly different from our computation). This aspect is overlooked by the physicists. We can cite many such instances where the conventions of mathematicians differ from those of physicists. The famous Cambridge coconut puzzle is a clear representation of the differences between physics and mathematics. Yet the physicists insist that unless a theory is presented in a mathematical form, they will not even look at it. We do not accept that the laws of physics break down at singularity. At singularity, only the rules of the game change and the mathematics of infinities takes over.
The mathematics for a multi-body system like a lithium or higher atom is done by treating the atom as a number of two-body systems. Similarly, the Schrödinger equation in so-called one dimension (it is a second-order equation as it contains a term x², which is in two dimensions and mathematically implies area) is converted to three dimensions by the addition of two similar factors for the y and z axes. Three dimensions mathematically imply volume. Addition of three (two-dimensional) areas does not generate (three-dimensional) volume, and x² + y² + z² ≠ x·y·z. Similarly, mathematically all operations involving infinity are void. Hence renormalization is not mathematical. Thus, the so-called mathematics of modern physicists is not mathematical at all!
Unlike quantum physicists, we will not use complex terminology and undefined terms; we will not first write everything as integrals and/or partial derivatives. We will not use Hamiltonians, covariant four-vectors and contravariant tensors of the second rank, Hermitian operators, Hilbert spaces, spinors, Lagrangians, various forms of matrices, action, gauge fields, complex operators, Calabi-Yau shapes, 3-branes, orbifolding and so on to make it incomprehensible. We will not use “advanced mathematics”, such as the Abelian, non-Abelian and affine models etc., based on mere imagery at the axiomatic level. We will describe physics as it is perceived. We will use mathematics only to determine “how much” a system changes when some input parameters are changed, and then explain the changed output as it is perceived.
Lorentz force law deals with what happens when charges are in motion. This is a standard law with wide applications including designing TV Picture Tubes. Thus, its authenticity is beyond doubt. When parallel currents are run next to one another, they are attracted when the currents run in the same direction and repulsed when the currents run in opposite directions. The attractive or repulsive force is proportional to the currents and points in a direction perpendicular to the velocity. Observations and measurements demonstrate that there is an additional field that acts only on moving charges. This force is called the Lorentz force. This happens even when the wires are completely charge neutral. If we put a stationary test charge near the wires, it feels no force.
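For a sense of the magnitudes involved, the standard textbook expression for the force per unit length between two long parallel wires, F/L = μ₀I₁I₂/(2πd), can be evaluated numerically. The formula and the numbers below are illustrative assumptions, not taken from the text:

    import math

    # Force per unit length between two long parallel wires (SI units);
    # the currents and separation are assumed example values.
    mu0 = 4 * math.pi * 1e-7   # vacuum permeability, T*m/A
    I1, I2 = 10.0, 10.0        # currents, A
    d = 0.01                   # separation, m

    f_per_len = mu0 * I1 * I2 / (2 * math.pi * d)
    print(f"{f_per_len:.2e} N/m")   # 2.00e-03 N/m, attractive for parallel currents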
Consider a long wire that carries a current I and generates a corresponding magnetic field. Suppose that a charge moves parallel to this wire with velocity v. The magnetic field of the wire leads to an attractive force between the charge and the wire. With reference to the wire frame, there is no contradiction. But the problem arises when we apply the first postulate of Special Relativity, that the laws of physics are the same for all frames of reference. With reference to the charge-frame, the charge is stationary. Hence there cannot be any magnetic force. Further, a charged particle can gain (or lose) energy from an electric field, but not from a magnetic field. This is because the magnetic force is always perpendicular to the particle’s direction of motion. Hence, it does no work on the particle. (For this reason, in particle accelerators, magnetic fields are often used to guide particle motion, e.g., in a circle, but the actual acceleration is performed by the electric fields.) Apparently, the only solution to the above contradiction is to assume some attractive force in the charge frame. The only attractive force in the charge frame must be an attractive electric field. In other words, apparently, a force is generated by the charge on itself while moving, i.e., a back reaction, so that the total force on the charge is the back reaction plus the applied force.
There is something fundamentally wrong in the above description. A charge must move in a medium. No one has ever seen the evidence for “bare charge” just like no one has ever seen the evidence for “bare mass”. Thus, “a charge moves parallel to this wire” must mean either that a charged body is passing by the wire or an electric current is flowing at a particular rate. In both cases, it would generate a magnetic field. Thus, the law of physics in both frames of reference is the same. Only the wrong assumption of the charge as stationary with reference to itself brings in the consequentially wrong conclusion of back reaction. This denormalization was sought to be renormalized.
Classical physics gives simple rules for calculating this force. An electron at rest is surrounded by an electrostatic field, whose value at a distance r is given by:
ε(r) = e/r². …………………………………………………………………(1)
If we consider a cell of unit volume at a distance r, the energy content of the cell is: (1/8π)ε²(r). ……………………………………………………………(2)
The total electrostatic energy E is therefore obtained by integrating this energy over the whole of space. This raises the question about the range of integration. Since electromagnetic forces are involved, the upper limit is taken as infinity. The lower limit could depend upon the size of the electron. When Mr. Lorentz developed his theory of the electron, he assumed the electron to be a sphere of radius a. With this assumption, he arrived at:
E = e²/2a. …………………………………………………………………… (3)
The trouble started when attempts were made to calculate this energy from first principles. When a, the radius of the electron, approaches zero for a point charge, the denominator in equation (3) becomes zero, implying that the total energy diverges to infinity:
E = e²/2a → ∞ as a → 0. …………………………………………………… (4)
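The divergence can be checked numerically. The sketch below (Gaussian units, with e set to 1 purely for illustration) integrates the energy density of equation (2) over all space outside a radius a and compares the result with the closed form e²/2a of equation (3):

    import math

    # Numeric check (Gaussian units, e = 1 for illustration): integrate the
    # energy density of equation (2) over log-spaced shells from r = a outward
    # and compare with the closed form E = e^2/(2a) of equation (3).

    def self_energy(a, r_max=1e9, n=100_000):
        total = 0.0
        log_a, log_b = math.log(a), math.log(r_max)
        du = (log_b - log_a) / n
        for k in range(n):
            r = math.exp(log_a + (k + 0.5) * du)            # midpoint of the shell
            density = (1.0 / (8 * math.pi)) / r**4          # (1/8pi) * (e/r^2)^2
            total += density * 4 * math.pi * r**2 * r * du  # shell volume element
        return total

    for a in (1.0, 0.1, 0.01):
        print(a, self_energy(a), 1 / (2 * a))   # numeric value vs e^2/(2a)

As a shrinks by a factor of ten, the energy grows by the same factor, with no bound.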
As Mr. Feynman puts it; “What’s wrong with an infinite energy? If the energy can’t get out, but must stay there forever, is there any real difficulty with an infinite energy? Of course, a quantity that comes out as infinite may be annoying, but what matters is only whether there are any observable physical effects. To answer this question, we must turn to something else besides the energy. Suppose we ask how the energy changes when we move the charge. Then, if the changes are infinite, we will be in trouble”.
Electrodynamics suggests that mass is the effect of charged particles moving, though there can be other possible sources of origin of mass. We can take mass broadly of two types: mechanical or bare mass that we denote as m0 and mass of electromagnetic origin that we denote as mem. Total mass is a combination of both. In the case of electron, we have a mass experimentally observed, which must be equal to:
mexp = m0 + mem, …………………………………………………………….(5)
i.e., experimental mass = bare mass + electromagnetic mass.
This raises the question: what is mass? We will explain this, and the mechanism of generation of mass without the Higgs mechanism, separately. For the present, it would suffice to note that the implication of equation (1) can be understood only through a confined field. The density of a confined field varies inversely with radius or diameter. If this density is affected at one point, the effect travels all along the field to affect other particles within the field. This is the only way to explain the seeming action at a distance. The interaction of the field is fully mechanical. Though this fact is generally accepted, there is a tendency among scientists to treat the field as not a kind of matter and to treat all discussions about the nature of the field as philosophical or meta-physical. For the present, we posit that mass is “field confined (which increases density beyond a threshold limit)”. Energy is “mass unleashed”. We will prove it later. Now, let us consider a paradox! The nucleus of an atom, where most of its mass is concentrated, consists of neutrons and protons. Since the neutron is thought of as a particle without any charge, its mass should be purely mechanical or bare mass. The mass of the charged proton should consist of m0 + mem. Hence, the mass of the proton should be higher than that of the neutron, whereas the opposite is actually the case. We will explain this apparent contradiction later.
When the electron is moved with a uniform velocity v, the electric field generated by the electron’s motion acquires a momentum, i.e., mass × velocity. It would appear that the electromagnetic field acts as if the electron had a mass purely of electromagnetic origin. Calculations show that this mass mem is given by the equation:
mem = (2/3)(e²/ac²), ………………………………………………………… (6)
or, inverting for the radius: a = (2/3)(e²/memc²), …………………………… (7)
where a defines the radius of the electron.
Again we land in a problem, because if we treat a = 0, then equation (6) tells us that mem = ∞. ………………………………………………………………………(8)
Further, if we treat the bare mass of the electron m0 = 0 for a point particle, then the mass is purely electromagnetic in origin. In that case:
mem = mexp = observed mass = 9.10938188 × 10⁻³¹ kilograms, ………….…... (9)
which contradicts equation (8).
Putting the value of eq. (9) in eq. (7), we get: a = (2/3)(e²/mexpc²), ..….… (10)
as the radius of the electron. But we know that the classical electron radius is:
r0 = e²/mexpc² ≈ 2.818 × 10⁻¹⁵ m. ……………………………………………(11)
The factor 2/3 in a depends on how the electric charge is actually distributed in the sphere of radius a. We will discuss it later. The r0 is the nominal radius. According to the modern quantum mechanical understanding of the hydrogen atom, the average distance between electron and proton is ≈ 1.5a0, somewhat different from the value in the Bohr model (≈ a0), but certainly of the same order of magnitude. The value 1.5a0 is approximate, not exact, because it neglects reduced mass, fine structure effects (such as relativistic corrections), and other such small effects.
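The radii discussed above can be reproduced numerically. The sketch below works in SI units, where the Gaussian e² of the text corresponds to e²/(4πε₀):

    import math

    # Evaluating the radii discussed above in SI units (the Gaussian e^2 of
    # the text corresponds to e^2/(4*pi*eps0) in SI).
    e    = 1.602176634e-19     # elementary charge, C
    m    = 9.10938188e-31      # electron mass as quoted in the text, kg
    c    = 2.99792458e8        # speed of light, m/s
    eps0 = 8.8541878128e-12    # vacuum permittivity, F/m

    r0 = e**2 / (4 * math.pi * eps0 * m * c**2)   # classical electron radius
    print(f"r0       = {r0:.3e} m")               # ~2.818e-15 m
    print(f"(2/3)r0  = {2 / 3 * r0:.3e} m")       # the a of equation (10)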
If the electron is a charged sphere, then since it contains charge of the same sign throughout, it should normally explode. However, if it is a point charge where a = 0, it will not explode – since zero has existence but no dimensions. Thus, if we treat the radius of the electron as non-zero, we land at instability. If we treat the radius of the electron as zero, we land at “division of a number by zero”, which is treated as infinity. Hence equation (6) shows mem as infinity, which contradicts equation (9), which has been physically verified. Further, due to the mass-energy equation E = m0c², mass is associated with an energy. This energy is known as self-energy. If mass diverges, self-energy also diverges. For infinite mass, the self-energy also becomes infinite. This problem has not been satisfactorily solved till date. According to standard quantum mechanics, if E is the energy of a free particle, its wave-function changes in time as:
Ψ(t) = e^(–iEt/ħ) Ψ(0). …………………………………………………… (12)
Thus, effectively, time evolution adds a phase factor e^(–iEt/ħ). The “dressing up” only changes the value of E to (E + ΔE). Hence, it can be said that as the mass of the particle changes from m0, the value appropriate to a bare particle, to (m0 + Δm), the value appropriate to the dressed-up, physically observable “isolated” or “free” particle, the energy changes from E to (E + ΔE). Now, the value of (m0 + Δm), which is the observed mass, is known to be 9.10938188 × 10⁻³¹ kilograms. But Δm, which is the same as mem, is infinite. Hence again we are stuck with an infinity.
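Equation (12) is easy to examine numerically. The sketch below (using the standard complex representation, i.e., pairs of reals, and units with ħ = 1 – an assumption made purely for illustration) confirms that the factor e^(–iEt/ħ) has unit modulus, so that “dressing” E into E + ΔE merely multiplies the wave-function by one more phase:

    import cmath

    # Equation (12): time evolution multiplies the wave-function by the pure
    # phase exp(-i*E*t/hbar).  Units with hbar = 1 are assumed for illustration.
    hbar = 1.0
    E, dE, t = 2.0, 0.5, 3.0
    psi0 = 0.6 + 0.8j                  # an initial amplitude with |psi0| = 1

    psi_t = cmath.exp(-1j * E * t / hbar) * psi0
    print(abs(psi0), abs(psi_t))       # both 1.0: a phase never changes |psi|

    # "Dressing" E -> E + dE multiplies by one further phase factor:
    dressed = cmath.exp(-1j * (E + dE) * t / hbar) * psi0
    print(abs(dressed / psi_t))        # ~1.0: the extra factor exp(-i*dE*t/hbar)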
Mr. Tomonaga, Mr. Schwinger and Mr. Feynman independently tried to solve the problem. They argued that what we experimentally observe is not the bare electron, which can never be observed directly, because it is always interacting with its own field. In other words, they said that the experimental results must be wrong because something which cannot be experimentally verified is changing them! And only after something else is subtracted from the experimental results would they give the correct figures! It must be magic or voodoo! There is no experimental proof till date to justify the inertial increase of mass. Energy does affect volume, which affects density, but it does not affect mass. Further, they have not defined “what an electron is”. Hence they can assign any property to it as long as the figures match. This gives them a lot of liberty to play with the experimental values to match their “theories”. What they say effectively means: if one measures a quantity and gets the result x, it must be the wrong answer; the correct answer should be x′ – Δx, so that the result is x. Since we cannot experimentally observe Δx, we cannot get x′. But that is irrelevant. You must believe that what the scientists say is the only truth. And they get the Nobel Prize for that “theory”!
It is this hypothetical interaction Δm that “dresses up” the electron by radiative corrections to de-normalize it. Thereafter, they started the “mathematical” magic of renormalization. Since Δm was supposed to be ∞, they tried to “nullify” or “kill” the infinity by using a counter term. They began with the hydrogen atom. They assumed the mass of the electron as m0 + Δm and switched on both coulombic and radiative interactions. However, the Hamiltonian for the interaction was written not as HI, but as HI with a Δm counter term. Thereafter, they cancelled +Δm by –Δm. This operation is mathematically not legitimate, as in mathematics all operations involving infinity are void. Apart from the wrong assumptions, the whole problem has arisen primarily because of the mathematics involving division by zero, which has been assumed to be infinite. Hence let us examine this closely. First, the traditional view.
Division of two numbers a and b is the reduction of the dividend a by the divisor b, or taking the ratio a/b to get the result (quotient). Cutting or separating an object into two or more parts is also called division. It is the inverse operation of multiplication. If a × b = c, then a can be recovered as a = c/b as long as b ≠ 0. Division by zero is the operation of taking the quotient of any number c and 0, i.e., c/0. The uniqueness of division breaks down when dividing by b = 0, since the product a × 0 = 0 is the same for any value of a. Hence a cannot be recovered by inverting the process of multiplication (a = c/b). Zero is the only number with this property and, as a result, division by zero is undefined for real numbers and can produce a fatal condition called a “division by zero error” in computer programs. Even in fields other than the real numbers, division by zero is never allowed.
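The “division by zero error” mentioned above can be observed directly; inverting a multiplication by b fails exactly when b = 0:

    # Inverting a multiplication by b fails exactly when b = 0.

    def recover(c, b):
        # Try to recover a from c = a * b by computing c / b.
        return c / b

    print(recover(12, 4))       # 3.0 -- a is recovered whenever b != 0

    try:
        recover(0, 0)           # a * 0 == 0 for every a, so a cannot be recovered
    except ZeroDivisionError as err:
        print("undefined:", err)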
Now let us evaluate (1 + 1/n)^n for any number n. As n increases, 1/n decreases. For very large values of n, 1/n becomes almost negligible. Thus, for all practical purposes, (1 + 1/n) = 1. Since any power of 1 is also 1, the result would seem unchanged for any value of n. A similar argument is made when n is very small: treating n itself as negligible, i.e., as zero, any number raised to the power of zero is unity. There is a fatal flaw in this argument, because n may approach ∞ or 0, but it never “becomes” ∞ or 0.
On the other hand, whatever the value of 1/n, it will always be more than zero, even for large values of n. Hence, (1 + 1/n) will always be greater than 1. When a number greater than one is raised to increasing powers, the result becomes larger and larger. Since (1 + 1/n) will always be greater than 1, it might seem that for very large values of n, the result of (1 + 1/n)^n should grow ever bigger. But what happens when n is very small and comparable to zero? This leads to the problem of “division by zero”. The contradicting results shown above were sought to be resolved by the concept of limit, which is at the heart of calculus. The generally accepted concept of limit led to the result: as n approaches 0, 1/n approaches ∞. Since that created all the problems, let us examine this aspect closely.
In Europe, the concept of limit goes back to Mr. Archimedes. His method was to inscribe a number of regular polygons inside a circle. In a regular polygon, all sides are equal in length and each angle is equal to the adjacent angles. If the polygon is inscribed in the circle, its area will be less than that of the circle. However, as the number of sides in the polygon increases, its area approaches the area of the circle. Similarly, by circumscribing the polygon over the circle, as the number of its sides goes up, its circumference and area approach those of the circle. Hence, the value of π can easily be found by dividing the circumference by the diameter. If we take polygons of increasingly many sides and repeat the process, the true value of π can be “squeezed” between a lower and an upper boundary. His value for π was within the limits 3 10/71 < π < 3 1/7.
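Archimedes’ squeeze can be reproduced with his own doubling procedure, which assumes no value of π anywhere. In the sketch below, a and b are the semiperimeters of the circumscribed and inscribed regular n-gons around a unit circle; both close in on π from opposite sides:

    import math

    # Archimedes' doubling recurrence, starting from the regular hexagon:
    # a, b are semiperimeters of the circumscribed and inscribed n-gons of a
    # unit circle.  No value of pi is assumed anywhere in the iteration.
    a = 2 * math.sqrt(3)   # circumscribed hexagon
    b = 3.0                # inscribed hexagon
    n = 6
    while n < 96:
        a = 2 * a * b / (a + b)   # harmonic mean: circumscribed 2n-gon
        b = math.sqrt(a * b)      # geometric mean: inscribed 2n-gon
        n *= 2
    print(f"{n}-gon: {b:.6f} < pi < {a:.6f}")   # 3.141032 < pi < 3.142715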
Long before Mr. Archimedes, the idea was known in India and was used in the Shulba Sootras, the world’s earliest mathematical works. For example, one of the formulae prevalent in ancient India for determining the length of each side of a polygon with 3, 4, … 9 sides inscribed inside a circle was as follows: multiply the diameter of the circle by 103923, 84853, 70534, 60000, 52055, 45922 or 41031 for polygons having 3 to 9 sides respectively, and divide the products by 120000. The result is the length of each side of the polygon. This formula can be extended further to any number of sides of the polygon.
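The quoted coefficients can be checked against the exact chord length: a regular n-gon inscribed in a circle of diameter d has side d·sin(π/n), so each coefficient should be close to 120000·sin(π/n). A short sketch:

    import math

    # Each quoted coefficient should be close to 120000*sin(pi/n), since a
    # regular n-gon inscribed in a circle of diameter d has side d*sin(pi/n).
    quoted = {3: 103923, 4: 84853, 5: 70534, 6: 60000, 7: 52055, 8: 45922, 9: 41031}
    for n, k in quoted.items():
        print(f"n={n}: quoted {k:6d}, exact {120000 * math.sin(math.pi / n):9.1f}")
    # Most entries agree to the nearest integer; n = 7 and n = 9 deviate
    # slightly, reflecting the rounding of the ancient tables.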
Aachaarya Brahmagupta (591 AD) solved indeterminate equations of the second order in his book “Brahmasphoota Siddhaanta”; they came to be known in Europe as Pell’s equations after about 1000 years. His lemmas to the above solution were rediscovered by Mr. Euler (1764 AD) and Mr. Lagrange (1768 AD). He enunciated a formula for the rational cyclic quadrilateral. Chhandas is a Vedic metric system, which was first methodically discussed by Aachaarya Pingala Naaga of antiquity. His work was developed by subsequent generations, particularly by Aachaarya Halaayudha during the 10th century AD. Using chhandas, Aachaarya Halaayudha postulated a triangular array for determining the types of combinations of n syllables of long and short sounds for metrical chanting, called Chityuttara. He developed it mathematically into a pyramidal expansion of numbers. The ancient treatise on medicine, Kashyapa Samhita, uses Chityuttara for classifying chemical compositions and diseases, and used it for treatment. Much later, it appeared in Europe as Pascal’s triangle. Based on this, (1 + 1/n)^n has been evaluated as the limit:
e = 2.71828182845904523536028747135266249775724709369995….
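The convergence can be watched directly:

    # Watching (1 + 1/n)**n approach the constant quoted above as n grows.
    for n in (10, 1000, 100_000, 10_000_000):
        print(n, (1 + 1 / n) ** n)
    # 10 -> 2.59374...,  1000 -> 2.71692...,  1e5 -> 2.71827...,  1e7 -> 2.71828...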
Aachaarya Bhaaskaraachaarya – II (1114 AD), in his algebraic treatise “Veeja Ganitam”, had used the “chakravaala” (cyclic) method for solving indeterminate equations of the second order, which has been hailed by the German mathematician Mr. Henkel as “the finest thing achieved in the theory of numbers before Lagrange”. He used basic calculus based on “Aasannamoola” (limit), “chityuttara” (matrix) and “circling the square” methods several hundred years before Mr. Newton and Mr. Leibniz. “Aasannamoola” literally means “approaching a limit” and has been used in India since antiquity. Surya Siddhanta, Mahaa Siddhanta and other ancient treatises on astronomy used this principle. The latter work, as appears from internal evidence, was written around 3100 BC. However, there is a fundamental difference between these methods and the method later adopted in Europe. The concepts of limit and calculus have been tested for their accuracy and must be valid. But while the Indian mathematicians held that they have limited application in physics, the Europeans held that they are universally applicable. We will discuss this elaborately.
Both Mr. Newton and Mr. Leibniz evolved calculus from charts prepared from the power series, based on the binomial expansion. The binomial expansion is supposed to be an infinite series expansion of a complex differential that approached zero. But this involved the problems of the tangent to the curve and the area of the quadrature. In Lemma VII in Principia, Mr. Newton states that at the limit (when the interval between two points goes to zero), the arc, the chord and the tangent are all equal. But if this is true, then both his diagonal and the versine must be zero. In that case, he is talking about a point with no spatial dimensions. In case it is a line, then they are all equal. In that case, neither the versine equation nor the Pythagorean Theorem applies. Hence it cannot be used in calculus for summing up an area with spatial dimensions.
Mr. Newton and Mr. Leibniz found the solution to the calculus while studying the “chityuttara” principle or the so-called Pascal’s differential triangle. To solve the problem of the tangent, this triangle must be made smaller and smaller. We must move from x to Δx. But can it be mathematically represented? No point on any possible graph can stand for a point in space or an instant in time. A point on a graph stands for two distances from the origin on the two axes. To graph a straight line in space, only one axis is needed. For a point in space, zero axes are needed. Either you perceive it directly without reference to any origin or it is non-existent. Only during measurement, some reference is needed.
While number is a universal property of all substances, there is a difference between its application to objects and to quantities. Number is related to the object proper, which exists as a class or an element of a set in a permanent manner, i.e., not only at “here-now”, but also at other times. Quantity is related to objects only during measurement at “here-now” and is liable to change from time to time. For example, protons and electrons as separate classes can be assigned class numbers 1 and 2 or any other permanent class number. But their quantity, i.e., the number of protons or electrons as seen during measurement of a sample, can change. The difference between these two categories is a temporal one. While the description “class” is time invariant, the description “quantity” is time variant, because it can only be measured at “here-now” and may subsequently change. The class does not change. This is important for defining zero, as zero is related to quantity, i.e., the absence of a class of substances that was perceived by us earlier (otherwise we would not perceive its absence), but does not exist at “here-now”. It is not a very small quantity, because even an infinitely small quantity is present at “here-now”. Thus, the expression lim(n→∞) 1/n = 0 does not mean that 1/n will ever be equal to zero.
Infinity, like one, is without similars. But while the dimensions of “one” are fully perceptible; those for infinity are not perceptible. Thus, space and time, which are perceived as without similars, but whose dimensions cannot be measured fully, are infinite. Infinity is not a very big number. We use arbitrary segments of it that are fully perceptible and label it differently for our purpose. Ever-changing processes can’t be measured other than in time – their time evolution. Since we observe the state and not the process of change during measurement (which is instantaneous), objects under ideal conditions are as they evolve independent of being perceived. What we measure reflects only a temporal state of their evolution. Since these are similar for all perceptions of objects and events, we can do mathematics with it. The same concept is applicable to space also. A single object in void cannot be perceived, as it requires at least a different backdrop and an observer to perceive it. Space provides the backdrop to describe the changing interval between objects. In outer space, we do not see colors. It is either darkness or the luminous bodies – black or white. The rest about space are like time.
There are functions like a_n = (2n + 1)/(3n + 4), which hover around values close to 2/3 for all values of n. Even though objects are always discrete, it is not necessary that this discreteness must be perceived after direct measurement. If we measure a sample and infer the total quantity from such direct measurement, the result can be perceived equally precisely, and it is a valid method of measurement – though within the constraints of the mechanism for precision measurement. However, since physical particles are always discrete, the indeterminacy is terminated at a desired accuracy level that is perceptible. This is the concept behind “Aasannamoola” or the digital limit. Thus, the value of π is accepted as 3.141… Similarly, the ratio between the circumference and diameter of astral bodies, which are spheroids, is taken as √10 or 3.16… We have discussed these in our book “Vaidic Theory of Number”. This also conforms to the modern definition of function, according to which every x plugged into the equation will yield exactly one y out of the equation – a discrete quantity. This also conforms to the physical Hamiltonian, which is basically a function, hence discrete.
Now, let us take a different example: a_n = (2n² + 1)/(3n + 4). Here n² represents a two-dimensional object, such as an area or a graph. Areas or graphs are nothing but sets of continuous points in two dimensions. Thus, it is like a field that varies smoothly without breaks or jumps and cannot propagate in a true vacuum. Unlike a particle, it is not discrete, but continuous. For n = 1, 2, 3, …, the value of a_n diverges as 3/7, 9/10, 19/13, … For every value of n, the value for n + 1 grows faster than the earlier rate of divergence. This is because the term n² in the numerator grows at a faster rate than the denominator. This does not happen in physical accumulation or reduction. In division, the quotient always increases or decreases at a fixed rate in proportion to the changes in either the dividend or the divisor or both.
For example, 40/5 = 8 and 40/4 = 10. The ratio of change of the quotient from 8 to 10 is the same as the inverse of the ratio of change of the divisor from 5 to 4. But in the case of our example a_n = (2n² + 1)/(3n + 4), the ratio of change from n = 2 to n = 3 is from 9/10 to 19/13, which is different from 2/3 or 3/2. Thus, the statement:
lim(n→∞) a_n = (2n² + 1)/(3n + 4) → ∞,
is neither mathematically correct (as the value for n + 1 is always greater than that for n, and their ratio is never fixed) nor applicable to discrete particles (since it is indeterminate). According to relativity, wherever a speed comparable to that of light is involved, as for a free electron or photon, the Lorentz factor invariably comes in to limit the output. There is always a length, mass or time correction. But there is no such correcting or limiting factor in the above example. Thus, the present concept of limit violates the principle of relativistic invariance for high velocities and cannot be used in physics.
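The contrast between the two sequences can be tabulated directly:

    # The two sequences above, side by side: the first settles near 2/3,
    # the second grows without bound (roughly like 2n/3).
    for n in (1, 2, 3, 10, 100, 1000):
        bounded = (2 * n + 1) / (3 * n + 4)
        unbounded = (2 * n**2 + 1) / (3 * n + 4)
        print(f"n={n:4d}   (2n+1)/(3n+4) = {bounded:.4f}   (2n^2+1)/(3n+4) = {unbounded:.2f}")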
All measurements are done at “here-now”. The state at “here-now” is frozen for future reference as the result of measurement. All other unknown states are combined together as the superposition of states. Since zero represents a class of objects that is non-existent at “here-now”, it cannot be used in mathematics except by way of multiplication (explained below). Similarly, infinity goes beyond “here-now”. Hence it cannot be used like other numbers. These uses violate the superposition principle, as measurement is sought to be done with something non-existent at “here-now”. For this reason, Indian mathematicians treated division by zero in geometry differently from that in physics.
Aachaarya Bhaaskaraachaarya (1114 AD) followed the geometrical method and termed the result of division by zero as “khahara”, which is broadly the same as renormalization except for the fact that he has considered non-linear multiplication and division only, whereas renormalization considers linear addition and subtraction by the counter term. He visualized it as something of a class that is taken out completely from the field under consideration. However, even he had described that if a number is first divided and then multiplied by zero, the number remains unchanged. Aachaarya Mahaavira (about 850 AD), who followed the physical method in his book “Ganita Saara Samgraha”, holds that a number multiplied by zero is zero and remains unchanged when it is divided by, combined with or diminished by zero. The justification for the same is as follows:
Numbers accumulate or reduce in two different ways. Linear accumulation and reduction are addition and subtraction. Non-linear accumulation and reduction are multiplication and division. Since mathematics is possible only between similars, in the case of non-linear accumulation and reduction, first only the similar part is accumulated or reduced. Then the mathematics is redone between the two parts. For example, two areas or volumes can only be linearly accumulated or reduced, but cannot be multiplied or divided. But areas or volumes can be multiplied or divided by a scalar quantity, i.e., a number. Suppose the length of a field is 5 meters and its breadth 3 meters. Both these quantities are partially similar, as they describe the same field. Yet they are dissimilar, as they describe different spreads of the same field. Hence we can multiply them. The area is 15 square meters. If we multiply the field by 2, it means that either we are increasing the length or the breadth by a factor of two. The result, 15 × 2 = 30 square meters, can be arrived at by first multiplying either 5 or 3 by 2 and then multiplying the result by the other quantity: (10 × 3 or 5 × 6). Of course, we can scale up or down both length and breadth. In that case, the linear accumulation has to be done twice separately before we multiply them.
Since zero does not exist at “here-now” where the numbers representing the objects are perceived, it does not affect addition or subtraction. During multiplication by zero, one non-linear component of the quantity is increased to zero, i.e., moves away from “here-now” to a superposition of states. Thus, the result becomes zero for the total component, as we cannot have a Schrödinger’s “undead” cat before measurement in real life. In division by zero, the “non-existent” part is sought to be reduced from the quantity (which is an operation akin to “collapse reversal” in quantum mechanics), leaving the quantity unchanged. Thus, physically, division by zero leaves the number unchanged.
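The rule attributed here to Aachaarya Mahaavira can be stated as a toy model, in deliberate contrast to standard arithmetic (this is an illustration of the text’s rule, not of conventional mathematics):

    # A toy model of the rule attributed to Aachaarya Mahaavira, contrasted
    # with standard arithmetic: multiplication by zero yields zero, while
    # division by zero leaves the quantity unchanged (the "non-existent"
    # cannot reduce what exists at "here-now").

    def mahavira_div(a, b):
        return a if b == 0 else a / b   # b = 0: a is left unchanged

    x = 15.0
    print(x * 0)                # 0.0  -- multiplication by zero
    print(mahavira_div(x, 0))   # 15.0 -- division by zero returns x itself
    print(mahavira_div(x, 3))   # 5.0  -- ordinary division is untouched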
This has important implications for many established concepts of physics. One example is the effect on mass, length and time of a body traveling at the velocity of light. According to the accepted view, these are contracted infinitely. Earlier we had shown the fallacies inherent in this view. According to the view of Aachaarya Mahaavira, there is no change in such cases. Thus, length and time contractions are not real but apparent. Hence treating it as real is bad mathematics. But its effect on point mass is most dramatic. We have shown in latter pages that all fermions (we call these asthanwaa – literally meaning something with a fixed structure) are three dimensional structures (we call these tryaanuka and the description tribrit) and all mesons (we call these anasthaa – literally meaning something without a fixed structure) are two dimensional structures (we call the description atri – literally meaning not three). Both of these are confined particles (we call these dwaanuka – literally meaning “coupled point masses” and the description agasti - literally meaning created in confinement). We treat the different energy that operate locally and fall off with distance as sub-fields (we call these jaala – literally a net) in the universal field. This agrees with Mr. Kennard’s formulation of uncertainty relation discussed earlier. By definition, a point has no dimension. Hence each point in space cannot be discerned from any other. Thus, a point-mass (we call it anu) is not perceptible. The mass has been reduced to one dimension making it effectively mass-less. Since after confinement in higher dimensions, it leads to generation of massive structures, it is not mass-less either.
When Mr. Fermi wrote the three-part Hamiltonian H = HA + HR + HI, where HA was the Hamiltonian for the atom, HR the Hamiltonian for radiation and HI the Hamiltonian for interaction, he was somewhat right. He should have written H as the Hamiltonian for the atom and HA as the Hamiltonian for the nucleus. We call these three (HA, HR, HI) “Vaya”, “Vayuna” and “Vayonaadha” respectively. Of these, the first has fixed dimension (we call it akhanda), the second both fixed and variable dimensions depending upon its nature of interaction (we call it khandaakhanda) and the third variable dimensions (we call it sakhanda). The third represents the energy that “binds” the other two. This can be verified by analyzing the physics of sand dunes. Many experiments have been conducted on this subject in the recent past. Water binds the sand in ideal conditions when the ratio between them is 1:8. More on this has been discussed separately. Different forces cannot be linearly additive but can only co-exist. Since the three parts of the Hamiltonian do not belong to the same class, they can only coexist, but cannot accumulate or reduce through interchange.
When Mr. Dirac wrote HI as HIΔm, so that Δm, which was thought to be infinite could be cancelled by –Δm, he was clearly wrong. There is no experimental proof till date to justify the inertial increase of mass. It is only a postulate that has been accepted by generations since Mr. Lorentz. Addition of energy in some cases may lead to a change in dimension with consequential change in density. Volume and density are inversely proportional. Change in one does lead to change in the other, which is an operational aspect. But it does not change the mass, which is related to existential aspect. Mr. Feynman got his Nobel Prize for renormalizing the so-called bare mass. As has been shown later, it is one of the innumerable errors committed by the Nobel Committee. The award was more for his stature and clever “mathematical” manipulation to match the observed values than for his experiment or verifiable theory.
A similar “mathematical” manipulation was done by Mr. Lev Landau, who developed a famous equation to find the so-called Landau pole, which is the energy at which the force (the coupling constant) becomes infinite. Mr. Landau found this pole or limit or asymptote by subtracting the bare electric charge e from the renormalized or effective electric charge eR: 1/eR² – 1/e² = (N/6π²) ln(Λ/mR)
Here momentum has been represented by Λ instead of the normal “p” for unexplained reasons – perhaps to introduce incomprehensibility or to assign magical properties to it later. Treating the renormalized variable eR as constant, one can calculate where the bare charge becomes singular. Mr. Landau interpreted this to mean that the coupling constant had become infinite at that value. He called this energy the Landau pole.
In any given experiment, the electron shows one and only one charge value, so that either e or eR must be incorrect. Thus, either the original mathematical value e or the renormalized mathematical value eR must be wrong. If the two values are different, both cannot be used as correct in the same equation. Thus, what Mr. Landau effectively does is add or subtract an incorrect value from a correct value to achieve “real physical information”! And he got his Nobel Prize for this achievement! In the late 1990s, there was a well-known “Landau pole problem” that was discussed in several journals. In one of them, the physicists claimed that: “A detailed study of the relation between bare and renormalized quantities reveals that the Landau pole lies in a region of parameter space which is made inaccessible by spontaneous chiral symmetry breaking”. We are not discussing it.
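Setting 1/e² = 0 (the bare charge “becoming singular”) in the quoted relation gives Λ = mR·exp(6π²/(N·eR²)). The sketch below evaluates this with illustrative assumptions – a single species (N = 1) and eR² = 4πα in natural units, with mR the electron mass in MeV; none of these numbers come from the text:

    import math

    # Solving the quoted relation for the pole: 1/e^2 = 0 gives
    # Lambda = m_R * exp(6*pi**2 / (N * e_R**2)).  Illustrative assumptions:
    # one species (N = 1), e_R**2 = 4*pi*alpha in natural units, m_R in MeV.
    alpha = 1 / 137.035999
    eR2 = 4 * math.pi * alpha
    N = 1
    m_R = 0.511                        # electron mass, MeV

    log10_pole = 6 * math.pi**2 / (N * eR2) / math.log(10) + math.log10(m_R)
    print(f"Lambda ~ 10^{log10_pole:.0f} MeV")   # ~10^280 MeV, absurdly large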
Some may argue that the effective charge and the bare charge are both experimental values: the effective charge being charge as experienced from some distance and the bare charge being the charge experienced on the point particle. In a way, the bare charge comes from 19th century experiments and the effective charge comes from 20th century experiments with the changing notion of field. This is the current interpretation, but it is factually incorrect. The difference must tell us something about the field. But there is no such indication. According to the present theory, the bare charge on the electron must contain a negative infinite term, just as the bare mass of the electron has an infinite term. To get a usable figure, both have to be renormalized. Only if we hold that the division by zero leaves the number unchanged, then the infinities vanish without renormalization and the problem can be easily solved.
Interaction is the effect of energy on mass, and it is not always the same as mass or its increase/decrease by a fixed rule. This can be proved by examining the mass of quarks. Since in the quark model the proton has three quarks, the masses of the “up” and “down” quarks were thought to be about ⅓ the mass of a proton. But this view has since been discarded. The quoted masses of quarks are now model dependent, and the mass of the bottom quark is quoted for two different models. In other combinations they contribute different masses. In the pion, an “up” and an “anti-down” quark yield a particle of only 139.6 MeV of mass energy, while in the rho vector meson, the same combination of quarks has a mass energy of 770 MeV. The difference between a pion and a rho is the spin alignment of the quarks. We will show separately that these spin arrangements arise out of different bonding within the confinement. The pion is a pseudo-scalar meson with zero angular momentum. The values for these masses have been obtained by dividing the observed energy by c². Thus, it is evident that different spin alignments in the “inner space” of the particle generate different pressures on the “outer space” of the particle, which are expressed as different masses.
When a particle is reduced to a point mass, it loses its confinement, as confinement implies dimension and a point has no dimension. Thus, it becomes not only indiscernible, but also one with the universal field implied in Mr. Kennard’s formulation, which has been validated repeatedly. Only in this way are the “virtual interactions” possible. Mr. Einstein’s ether-less relativity is supported neither by Mr. Maxwell’s equations nor by the Lorentz transformations, both of which are medium (aether) based. We will discuss this elaborately later. Any number, one and above, requires extension (1 from 0 and n from n-1). Since points by definition cannot have extension, number and point must be mutually exclusive. Thus, the point mass behaves like a part of the field. The photon is one such example. It is not a light quantum – that would make it mechanical, which would require it to have mass and diameter. Light is not “the appearance of the photon”, but the “momentary uncovering of the universal field due to the movement of energy through it”. Hence it is never stationary, and its speed varies with the density of the medium. There have been recent reports of bringing light to a stop, but the phenomenon has other explanations. Reduction of mass to this stage has been described as “khahara” by Aachaarya Bhaaskaraachaarya and others. The reverse process restores mass to its original confined value. Hence, if a number is first divided and then multiplied by zero, the number remains unchanged.
This shows the role of dimension and also proves that mass is confined field and charge is mass unleashed. This also explains why the neutron is heavier than the proton. According to our calculation, the neutron has a net negative charge of –1/11, which means it contains +10/11 (proton) and –1 (electron) charge. It searches for a complementary charge to attain equilibrium. Since negative charge confines the center of mass, the neutron generates pressure on a larger area of the outer space of the atom than the confined proton. This is revealed as the higher mass. Thus, the very concept of a fixed Δm to cancel an equivalent –Δm is erroneous.
Viewed from the above aspect, the “mass gap” and the Yang-Mills theory describing the strong interactions of elementary particles need to be reviewed. We have briefly discussed this in later pages. Since massive particles have dimensions, and interactions with other particles are possible only after the dimensions are broken through, let us examine dimension.
It can generally be said that the electrons determine atomic size, i.e., its dimensions. There are different types of atomic radii: the van der Waals radius, ionic radius, covalent radius, metallic radius, Bohr radius, etc. The Bohr radius is the radius of the lowest-energy electron orbit predicted by the Bohr model of the atom in 1913. It defines the dimensional boundary of single-electron atoms such as hydrogen. Although the model itself is now treated as obsolete, the Bohr radius for the hydrogen atom is still regarded as an important physical constant. Unless this radius is overtaken (the dimensional boundary is broken), no other atoms, molecules or compounds can be formed, i.e., the atom cannot take part in any chemical interaction. Thus, Mr. Bohr’s equations are valid only for the hydrogen atom and not for higher atoms.
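For reference, this constant can be computed directly from standard CODATA values; a short sketch:

    import math

    EPS0 = 8.8541878128e-12   # vacuum permittivity (F/m)
    HBAR = 1.054571817e-34    # reduced Planck constant (J*s)
    M_E  = 9.1093837015e-31   # electron rest mass (kg)
    Q_E  = 1.602176634e-19    # elementary charge (C)

    # Bohr radius: a0 = 4*pi*eps0*hbar^2 / (m_e * e^2)
    a0 = 4 * math.pi * EPS0 * HBAR**2 / (M_E * Q_E**2)
    print(f"Bohr radius ~ {a0:.4e} m")   # ~5.292e-11 m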
Most of the quantum physics dealing with extra-large or compact dimensions has not defined dimension precisely. In fact, in most cases, as in the description of a phase-space portrait, the term dimension has been used for vector quantities interchangeably with direction. Similarly, M-theory, which requires 11 undefined dimensions, defines strings as one-dimensional loops. Dimension is the differential perception of the “inner space” of an object (we call it aayatana) from its “outer space”. In a helium atom with two protons, the electron orbit determines this boundary. In a hydrogen molecule with two similar protons, the individual inner spaces are partially shared. When the relation between the “inner space” of an object and all “outer space” remains fixed, i.e., irrespective of orientation, the object is called a particle with characteristic discreteness. In other cases, it behaves like a field with characteristic continuity.
For perception of the spread of an object, the electromagnetic radiation emitted by the object must interact with that of our eyes. Since electric and magnetic fields move perpendicular to each other, and both are perpendicular to the direction of motion, we can perceive the spread of any object only in these three directions. Measuring the spread uniquely is essentially measuring the invariant space occupied by any two points on it. This measurement can be done only with reference to some external frame of reference. For the above reason, we arbitrarily choose a point that we call the origin and use axes that are perpendicular to each other (analogous to e.m. waves) and term these the x-y-z coordinates (length-breadth-height, making 3 dimensions; or right-left, forward-backward and up-down, making 6 dimensions). Mathematically, a point has zero dimensions. A straight line has one dimension. An area has two dimensions and a volume has three dimensions. A one-dimensional loop is mathematically impossible, as a loop implies curvature, which requires a minimum of two dimensions. Thus, the “mathematics” of string theory, which requires 10, 11 or 26 compactified or extra-large or time dimensions, violates all mathematical principles.
Let us now consider the “physics” of string theory. It was developed with a view to harmonizing General Relativity with Quantum theory. It is said to be a higher-order theory in which other models, such as supergravity and quantum gravity, appear as approximations. Unlike supergravity, string theory is said to be a consistent and well-defined theory of quantum gravity, and therefore calculating the value of the cosmological constant from it should, at least in principle, be possible. On the other hand, the number of vacuum states associated with it seems to be quite large, and none of these features three large spatial dimensions, broken supersymmetry, and a small cosmological constant. The features of string theory which are at least potentially testable – such as the existence of supersymmetry and cosmic strings – are not specific to string theory. In addition, the feature that is specific to string theory – the existence of strings themselves – either does not lead to precise predictions or leads to predictions that are impossible to test with current levels of technology.
There are many unexplained questions relating to the strings. For example, given the measurement problem of quantum mechanics, what happens when a string is measured? Does the uncertainty principle apply to the whole string? Or does it apply only to some section of the string being measured? Does string theory modify the uncertainty principle? If we measure its position, do we get only the average position of the string? If the position of a string is measured with arbitrarily high accuracy, what happens to the momentum of the string? Does the momentum become undefined, as opposed to simply unknown? What about the location of an end-point? If the measurement returns an end-point, then which end-point? Does the measurement return the position of some point along the string? (The string is said to be an extended object in space. Hence its position cannot be described by a finite set of numbers and thus cannot be described by a finite set of measurements.) How do Bell’s inequalities apply to string theory? We must get answers to these questions before we probe further and spend (waste!) more money on such research. These questions should not be swept under the carpet as inconvenient, or on the ground that some day we will find the answers. That “some day” has been a very long period indeed!
The point, line, plane, etc., have no physical existence, as they do not have physical extensions. As we have already described, a point vanishes in all directions. A line vanishes along the y and z axes. A plane vanishes along the z axis. Since we can perceive only three-dimensional objects, an object that vanishes partially or completely cannot be perceived. Thus, the equations describing these “mathematical structures” are unphysical and cannot explain physics by themselves. A cube drawn on paper (or marked on a three-dimensional surface) is not the same as a cubic object. Only when they represent some specific aspects of an object do they have any meaning. Thus, the description that the two-dimensional string is like a bicycle tyre and the three-dimensional object is like a doughnut, etc., and that the Type IIA coupling constant allows strings to expand into two- and three-dimensional objects, is nonsense.
This is all the more true for “vibrating” strings. Once it starts vibrating, a string becomes at least two-dimensional. A transverse wave will automatically push the string into a second dimension. It cannot vibrate lengthwise, because then the vibration would not be discernible. Further, no pulse could travel lengthwise in a string that is not divisible. There has to be some sort of longitudinal variation to allow compression and rarefaction; but this variation is not possible without subdivision. To vibrate in the right way for string theory, the strings must be strung very, very tight. But why are the strings vibrating? Why are some strings vibrating one way and others vibrating in a different way? What is the mechanism? Different vibrations should have different mechanical causes. What causes the tension? No answers! One must blindly accept these “theories”. And we thought blind acceptance was superstition!
Strings are not supposed to be made up of sub-particles; they are absolutely indivisible. Thus, they should be indiscernible and undifferentiated. Ultimate strings that are indivisible should act the same in the same circumstances. If they act differently, then the circumstances must differ. But nothing has been said about these different circumstances. The vast variation in behavior is just another postulate. How the everyday macroscopic world emerges from its strangely behaving microscopic constituents is yet to be explained by quantum physics. One of the major problems here is the blind acceptance of the existence of 10 or 11 or 26 dimensions, and the search for ways to physically explain those non-existent dimensions. And that is science!
The extra-dimension hypothesis started with a nineteenth-century novel that described “Flatland”, a two-dimensional world. In 1919, Mr. Kaluza proposed a fourth spatial dimension and linked it to relativity. It allowed the expression of both the gravitational field and the electromagnetic field – the only two of the major four that were known at the time. Using the vector fields as they have been defined since the end of the 19th century, the four-vector field could contain only one acceleration. If one tried to express two acceleration fields simultaneously, one got too many (often implicit) time variables showing up in denominators, and the equations started imploding. The calculus, as it has been used historically, could not flatten out all the accelerations fast enough for the mathematics to make any sense. What Mr. Kaluza did was to push the time variable out of the denominator and switch it into another x variable in the numerator. Minkowski’s new “mathematics” allowed him to do so. He termed the extra x-variable the fourth spatial dimension, without defining the term. It came as a big relief to Mr. Einstein, who was struggling not only to establish the “novelty” of his theory over the “mathematics” of Mr. Poincaré, who discovered the equation e = mc² five years before him, but also to include gravity in SR. Since then, the fantasy has grown bigger and bigger. But like all fantasies, the extra dimensions could not be proved in any experiment.
Some people have suggested the extra seven dimensions of M-theory to be time dimensions. The basic concept behind these extra fields is the rate-of-change concept of calculus. Speed is the rate of change of displacement. Velocity is speed with direction. Acceleration is the rate of change of velocity. In all such cases, the equations can be written as Δx/Δt, then Δx/Δt² (or ΔΔx), and so on. In all these cases, the power of the time variable in the denominator increases against the same space variable. Some suggested extending it further, like Δx/Δt³ (or ΔΔΔx) and so on, i.e., the rate of change of acceleration, the rate of change of that change, and so on. But in that case it can be extended ad infinitum, implying an infinite number of dimensions. Why stop only at 7? Further, we do not use any other terminology for the rate of change of acceleration except calling it variable acceleration. Speed becomes velocity when direction is included in the description. Velocity becomes acceleration when change in the direction is included in the description. But then what comes next for the change into a higher order?
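The chain of differences is easy to exhibit numerically; a small sketch (an arbitrary sample trajectory, nothing specific to M-theory):

    # Successive finite differences of a sampled trajectory x(t) = t**3:
    # each further step is just another delta-x over delta-t.
    dt = 0.001
    ts = [i * dt for i in range(1001)]
    xs = [t**3 for t in ts]

    def rate_of_change(series):
        return [(b - a) / dt for a, b in zip(series, series[1:])]

    speed = rate_of_change(xs)       # ~3t^2
    accel = rate_of_change(speed)    # ~6t
    jerk  = rate_of_change(accel)    # ~6, and one could keep differencing forever

    print(f"speed ~ {speed[500]:.3f}, accel ~ {accel[500]:.3f}, jerk ~ {jerk[500]:.3f}")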
Some try to explain this by giving the example of a speeding car moving at a constant velocity, which brings in a term t². Then they assume that the car, along with the road, is tucked inside a giant alien spacecraft, which moves in the same direction with a constant but different velocity (this they interpret as acceleration), which brings in another term t². Then they claim that the motion of the car relative to the earth, or to space, is now the compound of two separate accelerations, both represented by t². So the total acceleration would be constant, not variable, but it would be represented by t⁴. This is what they call a “variable acceleration” of higher order. But this is a wrong description. If we consider the motion of the spacecraft relative to us, then it is moving with a constant velocity. If we consider the car directly, then it too is moving at a different but constant velocity from us in unit time, represented by t or t², and not t⁴, which is meaningless.
String theory and M-theory continued to pursue this method. They had two new fields to express. Hence they had (at least) two new variables to be transported into the numerators of their equations. Every time they inserted a new variable, they had to insert a new field. Since they inserted the field in the numerator as another x-variable, they assumed that it is another space field and termed it an extra dimension. But it can equally be transported to the denominator as an inverse time variable. Both descriptions are wrong. Let us examine what a field is. A medium or a field is a substance or material which carries the wave. It is a region of space characterized by a physical property having a determinable value at every point in the region. This means that if we put something appropriate in a field, we can then notice “something else” out of that field, which makes the body interact with other objects put in that field in some specific ways that can be measured or calculated. This “something else” is a type of force. Depending upon the nature of that force, scientists categorize the field as a gravity field, electric field, magnetic field, electromagnetic field, etc. The laws of modern physics suggest that fields represent more than the possibility of forces being observed. They can also transmit energy and momentum. The light wave is a phenomenon that is completely defined by fields.
Now, let us take a physical example. Let us stand in a pool of static water with eyes closed. We do not feel the presence of the water except for the temperature difference. Now we stand in a fountain of flowing water. We feel a force from one direction. This is the direction of the flow of water. This force is experienced differently depending upon the velocity of the flow. Water is continuously flowing out and is being replaced by other water. There is no vacuum. But we cannot distinguish between the different waters that flow down. We only feel the force. If the velocity of the flow is too small, we may not experience any force. Only when the velocity crosses a threshold limit do we experience the force. This is a universal principle. It is noticed in black-body radiation and was explained by the photo-electric effect. While the threshold limit remains constant for each system, the force that is experienced varies by a fixed formula. The threshold limits provide the many universal constants of Nature. We measure the changes in force only as a·x, where “a” is a constant and “x” the variable. If we classify all forces into one group x, then we will have only one universal constant of Nature. This way, there will be only one background field containing many energy subfields (we call these “jaala”, literally meaning net) that behave like local density gradients. In that case, only the effect of the field gets locally modified. There is no need to add an extra space variable in the numerator or an inverse time variable in the denominator.
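The threshold idea described above can be put into a toy formula; the constants below are invented purely for illustration:

    # Toy model of the flowing-water example: nothing is felt below a threshold
    # velocity; above it the felt force varies as a*x with "a" a system constant.
    THRESHOLD = 0.5   # threshold velocity (arbitrary units, assumed)
    A = 2.0           # the constant "a" (assumed)

    def felt_force(flow_velocity):
        x = flow_velocity - THRESHOLD
        return A * x if x > 0 else 0.0

    for v in (0.1, 0.5, 0.6, 1.0, 2.0):
        print(f"flow {v:>4}: force {felt_force(v):.2f}")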
Let us look at speed. It is in essence no different from velocity. Both speed and velocity are effects of the application of force. Speed is the displacement that arises when a force is applied to a body, where the change in the direction of the body, or of the force acting on it, is ignored. When we move from speed to velocity, the direction is imported into the description depending upon the direction from which the force is applied. This makes velocity a vector quantity. In Mr. Newton’s second law, f = ma, which is valid only for constant-mass systems, the term “f” has not been qualified. Once an externally applied force acts on the body, the body is displaced. Thereafter, the force loses contact with the body and ceases to act on it. Assuming no other force is acting on the body, the body should move only due to inertia, which is constant. Thus, the body should move at constant velocity and the equation should be f = mv. Mr. Newton did not take this factor into account.
The rate of change in f = ma arises because of the application of additional force, which changes the direction of the velocity. The initial force may be applied instantaneously, like the firing of a bullet, or continuously, like a train engine pulling the bogies. In both cases the bodies move with constant velocity due to inertia. Friction changes the speed (not directly the velocity, because it acts against the direction of motion without affecting direction), which, in the second case, is compensated by the application of additional force by the engine. When velocity changes to acceleration, nothing new happens. It requires only the application of additional force to change the constant velocity due to inertia. This additional force need not be of another kind. Thus, this is a new cycle of force and inertia changing the speed of the body. The nature of the force and displacement is irrelevant to this description. Whether it is a horse-drawn cart or a steam engine, diesel engine, electric engine or rocket-propelled body, the result is the same.
Now let us import time into the equations of this motion. Time is an independent variable. Motion is related to space, which is also an independent variable. Both co-exist, but being independent variables, they operate independently of each other. A body can stay in the same position, or move 10 meters or a light year, in a nanosecond or in a billion years. Here the space coordinates and the time coordinates do not vary according to any fixed rule. They are operational descriptions and not existential descriptions. They can vary for the same body under different circumstances, but that does not directly affect the existence, physics or chemistry of the body or of other bodies (it may affect them through wear and tear, but that is an operational matter). Acceleration is defined as velocity per time, or displacement per time per time, i.e., per time squared, written mathematically as t². Squaring is possible only if there is non-linear accumulation (multiplication) of the same quantity. Non-linearity arises when the two quantities are represented by different coordinates, which also implies that they move along different directions. In the case of both velocity and acceleration, time moves in the same direction, from past to present to future. Thus, the description “time squared” is neither a physical nor a mathematical description. Hence acceleration is essentially no different from velocity, or speed with a direction. While velocity shows speed in a fixed direction over a finite time segment (a second, an hour or a year, etc.), acceleration shows change in the direction of velocity over an equal time segment, which implies the existence of another force acting simultaneously that changes the velocity over the same time segment. Hence no time squaring! Only the forces get coupled.
Dimension is an existential description. Change in dimension changes the existential description of the body irrespective of time and space. It never remains the same thereafter. Since everything is in a state of motion with reference to everything else, at different rates of displacement, these displacements cannot be put into any universal equation. Any motion of a body can be described only with reference to another body. Mr. Poincaré and others have shown that even the three-body problem cannot be solved precisely. Our everyday experience shows that a body moving with reference to other bodies can cover different distances over the same time interval, and the same distance over different time intervals. Hence any standard equation of motion including time variables for all bodies, or for a class of bodies, is totally absurd. The photon and other radiation that travel at uniform velocity are massless, i.e., without a fixed background structure – hence, strictly, they are not “bodies” (we call these asthanwaa – literally meaning “boneless or without any fixed background structure” – and the massive bodies asthimat – literally meaning “with bones or background structures”).
The three or six dimensions described earlier are not absolute terms, but are related to the order of placement of the object in the coordinate system of the field in which the object is placed. Since dimension is related to the spread of an object, i.e., the relationship between its “totally confined inner space” and its “outer space”; since the outer space is infinite; and since the outer space does not affect the inner space without breaking the dimension, the three or six dimensions remain invariant under mutual transformation of the axes. If we rotate the object so that the x-axis changes to the y-axis or z-axis, there is no effect on the structure (spread) of the object, i.e., the relative positions between different points on the body, and their relationship to the space external to it, remain invariant. Based on the positive and negative directions (spreading out from, or contracting towards, the origin), these describe six unique functions of position, i.e., (x,0,0), (-x,0,0), (0,y,0), (0,-y,0), (0,0,z), (0,0,-z), which remain invariant under mutual transformation. Besides these, there are four more unique positions, namely (x, y), (-x, y), (-x, -y) and (x, -y), where x = y for any value of x and y, which also remain invariant under mutual transformation. These are the ten dimensions, and not the so-called “mathematical structures”. Since time does not fit this description, it is not a dimension. These are described in detail in the book “Vaidic Theory of Numbers”, written by us and published on 30-06-2005. Unless the dimensional boundary is broken, a particle cannot interact with other particles. Thus, dimension is very important for all interactions.
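The counting scheme of the previous paragraph can be listed out explicitly; a sketch taking x = y = z = 1 purely for illustration:

    # Six unique axis positions plus four diagonal positions with |x| = |y|,
    # as enumerated above, giving the claimed total of ten.
    axis = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    diagonal = [(1, 1), (-1, 1), (-1, -1), (1, -1)]   # x = y in magnitude

    print(f"total unique positions: {len(axis) + len(diagonal)}")  # 10
    for p in axis + diagonal:
        print(p)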
While the above description applies to rigid-body structures, it cannot be applied to fluids, whose dimensions depend upon their confining vessel or base. Further, rigid-body structures have a characteristic resistance to destabilization of their dimension by others (we call it vishtambhakatwa). Particles with this characteristic are called fermions (we also call them dhruva, which literally means fixed structure). This resistance to disruption of position, which is based on internal energy and the inertia of restoration, is known as the potential energy of the particle. Unless this energy barrier is broken, the particle cannot interact with other particles. While discussing what an electron is, we have shown the deficiencies in the concepts of electronegativity and electron affinity. We have discussed the example of NaCl to show that the belief that ions tend to attain the electronic configuration of noble gases is erroneous. Neither sodium nor chlorine shows any tendency to become neon or argon. Their behaviour can be explained by the theory of transition states at the micro level and by escape velocity at the macro level.
In the case of fluids, the relationship between the “totally confined inner space” and the “outer space” is regulated not only by the nature of their confinement, but also by their response to density gradients and to applied forces that change these gradients. Since this relationship between the “outer space” and the “inner space” cannot be uniquely defined in the case of fluids, including gases, and since their state at a given moment is subject to change beyond recognition at the next moment, the combined state of all such unknown dimensions is said to be a superposition of states. These are called bosons (we also call them dhartra). Massless particles cannot be assigned such characteristics, as dimension is related to mass. Hence such particles cannot be called bosons, but must belong to a different class (we call them dharuna). Photons belong to this third class.
The relationship between the “inner space” and the “outer space” depends on the relative density of both. Since the inner space constitutes a three-layer structure (i.e., the core or nucleus, the extra-nucleic part, and the outer orbitals in atoms, with a similar arrangement in other bodies), the relationship between these stabilizes in seven different ways (2l + 1). Thus, the effects of these are felt in seven different ways by bodies external to them, falling off with distance. These are revealed as the seven types of gravitation.
Dimension is a feature of mass, which is determined by both volume and density. Volume and density are also features of charge, which, in a given space, is called force. Thus, mass and charge/force are related, but they explain different aspects of objects. In spherical bodies, from stars to protons, density is related to volume and volume is related to radius. Volume varies only with radius, which, in turn, varies inversely with density. Thus, for a given volume with a given density, increase or decrease in volume and density are functions of its radius or diameter, i.e., of the proximity or distance between the center of mass and its boundary. When, for some reason, the equilibrium of the volume or density is violated, the broken symmetry gives rise to the four-plus-one fundamental forces of nature.
We consider radioactive decay a type of fundamental interaction. These interactions are nothing but variable interactions between the nucleus representing mass (vaya) and the boundary (vayuna) determined by the diameter, mediated by the charge – the interacting force (vayonaadha). We know that the relationship between the centre and the boundary is directly related to the diameter. We also know that scaling the diameter up or down while keeping the mass constant varies inversely with the density of the body. Bodies with different densities co-exist at different layers, but are not coupled together. Thus, the mediating force can be related to each of these proximity-distance interactions between the centre and the boundary. These are the four fundamental interactions.
The proximity-proximity variables give rise to the so-called strong interaction that brings the centre of mass and the boundary towards each other, confining them (we call such interactions antaryaama). However, there are conceptual differences between the modern theory and our derivation. The strong force was invented to counteract the electromagnetic repulsion between protons in the nucleus. It is said that its influence is limited to a radius of 10⁻¹⁵ m. The question is: how do the protons come that close for the strong force to be effective? If they can come that close without repelling each other, in the absence of any other force, then the view that equal charges repel needs modification instead of the introduction of a strong force. If the strong force drops off as fast as is claimed, in order to keep it from interacting with nearby electrons, then it doesn’t explain nuclear creation at all. In that case protons can never interact with electrons.
Further, since the strong force has no electromagnetic force to overcome with neutrons, one would expect neutrons to either be crushed or thrown out of the nucleus by it. Modern theory suggests that it is prevented by the strong force proper, which is a binding force between quarks, via gluons, and the nuclear force, which is a “residue” of the strong force proper and acts between nucleons. It is suggested that the nuclear force does not directly involve the force carriers of QCD - the gluons. However, just as electrically neutral atoms (each said to be composed of canceling charges) attract each other via the second-order effects of electrical polarization, via the van der Waals forces, by a similar analogy, “color-neutral” nucleons may attract each other by a type of polarization which allows some basically gluon-mediated effects to be carried from one color-neutral nucleon to another, via the virtual mesons which transmit the forces, and which themselves are held together by virtual gluons. The basic idea is that the nucleons are “color-neutral”, just as atoms are “charge-neutral”. In both cases, polarization effects acting between near-by neutral particles allow a “residual” charge effect to cause net charge-mediated attraction between uncharged species, although it is necessarily of a much weaker and less direct nature than the basic forces which act internally within the particles. Van der Waals forces are not understood mechanically. Hence this is like explaining a mystery by an enigma through magic.
It is said that: “There is a high chance that the electron density will not be evenly distributed throughout a non-polar molecule. When electrons are unevenly distributed, a temporary multi-pole exists. This multi-pole will interact with other nearby multi-poles and induce similar temporary polarity in nearby molecules”. But why should the electrons not be evenly distributed? What prevents it from being evenly distributed? There is no evidence that electrons are unevenly distributed. According to the Uncertainty Principle, we cannot know the position of all the electrons simultaneously. Since the electrons are probabilities, we cannot know their distribution either. If electrons are probabilities, there is neither a high chance nor a low chance that electrons are unevenly distributed. The claim that there is a “high chance” is not supported by any evidence.
It is said that: “The strong force acting between quarks, unlike other forces, does not diminish in strength with increasing distance, after a limit (about the size of a hadron) has been reached... In QCD, this phenomenon is called color confinement, implying that only hadrons can be observed; this is because the amount of work done against a force of 10 newtons is enough to create particle-antiparticle pairs within a very short distance of an interaction. Evidence for this effect is seen in many failed free quark searches”. Non-observance of free quarks does not prove that the strong force does not diminish in strength with increasing distance. This is a wrong assertion. We have a different explanation for the observed phenomenon.
Mr. Feynman came up with his (in)famous diagrams that explained nuclear forces between protons and neutrons using pions as mediators, but like the Yukawa potentials, these diagrams are derived not from mechanical theory but from experiment. Both the diagrams and the potentials are completely heuristic. Neither “explanation” explains anything – they simply illustrate the experiment. It is just a naming, not an unlocking of a mechanism. Mr. Yukawa came up with the meson-mediation theory of the strong force. He did not explain how trading or otherwise using a pion as a mediator could cause an attractive force like the strong nuclear force. How can particle exchange cause attraction? Mr. Feynman did not change the theory; he simply illustrated it. Nor did Mr. Feynman provide a mechanism for the force. Both avoided the central question: why does the strong force, or the nuclear force, not act differently on protons and neutrons? If the proton and neutron have no electromagnetic repulsion and a strong nuclear force is binding them, then the neutron should be more difficult to separate from the nucleus than the proton. If the strong force were only a little stronger than the electromagnetic force, it would require only the difference between the two to free the proton from the nucleus, but it would require overcoming the entire strong force to free the neutron. For this reason the standard model proposes a strong force 100 times stronger than the electromagnetic force. This lowers the difference in binding energies between the neutron and the proton to cover up the problem. But this is reverse postulation!
Like Yukawa’s field (discussed later), it does not have any mechanics. The view that “carrier particles of a force can themselves radiate further carrier particles” is different from QED, where the photons that carry the electromagnetic force do not radiate further photons. There is no physical explanation for how carrier particles radiate further carrier particles, or how any radiation of any particles, primary or secondary, can cause the attractive force in the nucleus. Mr. Weinberg was forced to admit this in Volume II, p. 329, of his book The Quantum Theory of Fields: the equation g_s² = g² = (5/3)g′² is “in gross disagreement with the observed values of the coupling constants”.
The variable g_s is supposed to stand for the strong force, but here Mr. Weinberg has it the same size as the weak force. Mr. Weinberg says that there is an explanation for this, and that his solution applies only to masses at the scale of the big W bosons. But there is no evidence that these big gauge bosons have anything to do with the strong force. There is no experimental evidence that they have anything to do with creating any of the coupling constants. Even in the standard model, the connection of large gauge bosons to strong-force theory is tenuous or non-existent. So not only was Mr. Weinberg unable to clarify the mechanics of the strong force; he was also forced to admit that the gauge mathematics does not even work.
It is said that: “Half the momentum in a proton is carried by something other than quarks. This is indirect evidence for gluons. More direct evidence follows from looking at the reaction e⁺e⁻ → qq̄. At high energies, most of the time these events appear as two jets, one formed from the materialization of the quark and the other formed from the anti-quark. However, for a fraction of the time, three jets are seen. This is believed to be due to the process qq̄ + gluon”. Even from the point of view of the standard model, it is difficult to explain how half the momentum could fail to be carried by the particles that compose the proton itself. We need some sort of mechanical explanation for that. Momentum is caused by mass. Why would gluons make up 50% of the missing momentum? What is the evidence in support of assigning a full 50% of a real parameter to ad hoc particles? How can carrier particles carry half the real momentum? These are the mediating or carrier particles of the theory, with zero evidence. If gluons are field particles, they must be able to travel. When they are in transit, their momentum cannot be given to the proton. The gluon either travels to transmit a force, or it does not. If it travels, it cannot make up 50% of the momentum of the proton. If it does not travel, then it cannot transmit the force. Thus, the theory of the strong force is severely flawed.
We explain the strong force by a mechanism called “chiti”, which literally means consolidation. While discussing Coulomb’s law in later pages, we will show that, contrary to popular belief, charge interaction in all emission fields takes place in four different ways. Two positively charged particles interact by exploding. But it is not so for the interaction between two negatively charged particles; otherwise there would be no electricity. The strong force holds the positively charged particles together. This process generates spin. We will discuss the mechanism while describing spin. Proximity-distance variables generate the weak interaction (vahiryaama), where only the boundary shifts. This process also gives rise to angular momentum. Both the strong force and the weak force consolidate (we call it samgraha) two particles: the strong force consolidates them fully (we call it dhaarana), while the weak force consolidates them only partially.
Distance-proximity variables generate the electromagnetic interaction, where the bound field interacts with the centre of mass of other particles (upayaama). The modern view that messenger photons mediate the electromagnetic interaction is erroneous, as the photon field cannot create electricity or magnetism without the presence of an ion field. The photons must drive electrons or positive ions in order to create the forces of electricity and magnetism. Normally, the massless photons cannot create macro-fields on their own. Further, since the photon is said to be its own anti-particle, how does the same particle cause both attraction and repulsion? Earlier we pointed out the background structure and its relationship to the universal constants. When minimal energy moves through the universal background structure, it generates light. This transfer of momentum is known as the photon. Since the density of the universal background structure is minimal, the velocity of light is the maximum.
Distance-distance variables generate radioactive disintegration, which leads a part of the mass of the nucleus to be ejected (yaatayaama) in beta decay (saamparaaya gati) and coupled with a negatively charged particle. We will explain the mechanism separately.
These four are direct-contact interactions (dhaarana) which operate from within the body. All four are complementary forces and are needed for particle formation, as otherwise stable chemical reactions would be impossible. For the formation of atoms with higher and lower mass numbers, only the nucleus (and not the full body) interacts with the other particles. Once the centre of mass is determined, the boundary is automatically fixed, as there cannot be a centre without a boundary. Gravitational interaction (udyaama), which stabilizes the orbits of two particles or bodies around their common barycentre at the maximum possible distance (urugaaya pratishthaa), belongs to a different class altogether, as it is a partial interaction between the two bodies, treating each as a whole and without interfering with their internal dynamics (aakarshana). This includes gravitational interaction between sub-systems within a system. The internal dynamics of the sub-systems are not affected by gravitation.
Action is said to be an attribute of the dynamics of a physical system. Physical laws specify how a physical quantity varies over infinitesimally small changes in time, position, or other independent variables in its domain. Action is also said to be a mathematical function which takes the trajectory (also called the path or history) of the system as its argument and has a real number as its result. Generally, action takes different values for different paths. Classical mechanics postulates that the path actually followed by a physical system is that for which the action is minimized or, more generally, stationary. These statements are evidently self-contradictory: a stationary path is position and not action. The particle and its forces/fields may be useful “mathematical concepts”, but they are approximations to reality and do not physically exist by themselves. There is a fundamental flaw in such a description, because it considers the effects of the four fundamental forces described above not together, but separately.
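For what the textbook statement itself asserts (independently of the objection above), a quick numeric sketch: for a free particle between fixed endpoints, the discretized action S = Σ ½m(Δx/Δt)²Δt is smallest on the straight-line path, and any wiggled path gives a larger value.

    import math

    N, m = 1000, 1.0
    dt = 1.0 / N

    def action(path):
        # S = sum of (1/2)*m*v^2*dt over the discretized trajectory
        return sum(0.5 * m * ((b - a) / dt) ** 2 * dt
                   for a, b in zip(path, path[1:]))

    straight = [i / N for i in range(N + 1)]                                  # x = t
    wiggled = [i / N + 0.1 * math.sin(math.pi * i / N) for i in range(N + 1)]

    print(f"straight: {action(straight):.4f}")   # 0.5000
    print(f"wiggled:  {action(wiggled):.4f}")    # ~0.5247, larger as expected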
As an example of the forces being treated separately: while discussing Coulomb’s law, it will be shown that when Mr. Rutherford proposed his atomic model, he assumed that the force inside the atom is an electrostatic force. Thus, his equations treat the scattering as due to the Coulomb force, with the nucleus as a point charge. Both his equations and his size estimates are still used, though they have been updated (but never seriously recalibrated, much less reworked). The equation matches data up to a certain kinetic-energy level, but fails after that. Later physicists have assigned interaction with the strong force, in addition to the weak force, to explain this mismatch. But even there, gravity and radioactive disintegration have been ignored. We will discuss the fallacies in this explanation while discussing the electroweak theory.
Since all actions take place after the application of energy, which is quantized, what the above descriptions physically mean is that action is the effect of the application of force that leads to displacement. Within the dimensional boundary, it acts as the four fundamental forces of Nature that are responsible for the formation of particles (we call it vyuhana – literally stitching). Outside the dimensional boundary, it acts as the gravitational interaction that moves bodies in fixed orbits (we call it prerana – literally dispatch). After the initial displacement, the force ceases to act on the particle and the particle moves on inertia. The particle is then subjected to other forces, which change its state again. This step-by-step interaction with various forces continues in a chain reaction (we call it dhaara). The effects of the four forces described in the previous paragraph are individually different: total confinement (aakunchana), loose confinement (avakshepana), spreading from high concentration to low concentration (prasaarana) and disintegration (utkshepana). Thus, individually, these forces can continuously displace a particle only in one direction. Hence they cannot change the state of any particle beyond this. A change of state is possible only when all these forces act together on the body. Since these are inherent properties of the body, they can only be explained as transformations of the same force into these four forces. That way we can unite all the forces.
Gravity between two bodies stabilizes their orbits based on the mass-energy distribution over an area at the maximum possible distance (urugaaya pratisthaa). It is mediated by the field, which stabilizes the bodies in proportion to their dimensional density over the area. Thus, it belongs to a different class, where the bodies interact indirectly through the field (aakarshana). When it stabilizes proximally, it is called acceleration due to gravity. When it stabilizes at a distance, it is known as gravitation (prerana or gamana). Just as the constant of acceleration due to gravity, g, varies from place to place, G also varies from system to system, though this is not locally apparent. This shows that not only the four fundamental forces of Nature but also gravitation is essential for structure formation, as without it even the different parts of a body would not exist in a stable configuration.
The above principle is universally seen in every object or body. In the human body, the breathing in (praana) represents the strong interaction; the breathing out (and the other excretory functions – apaana) represents radioactive disintegration; the functions of the heart and lungs (vyaana and udaana) represent the weak and electromagnetic interactions respectively; and the force that does the fine-tuning (samaana) represents gravitation.
The concept can be further explained as follows. Consider two forces of equal magnitude but opposite direction acting on a point (like the centre of mass and the diameter that regulate the boundary of a body). Assuming that no other forces are present, the system would be in equilibrium, and it would appear as if no force were acting on it. Now suppose one of the forces is modified due to some external interaction. The system will become unstable, and the forces of inertia, which were earlier not perceptible, will appear as a pair of two oppositely directed forces. The magnitude of the new forces would not be the same as that of the earlier forces, because they would be constantly modified by the changing mass-energy distribution within the body. The net effect on the body due to the modified force would regulate the complementary force in the opposite direction. This is reflected in the apparently elliptical orbits of the planets. It must be remembered that a circle is a special case of an ellipse, where the distance between the two foci is zero.
All planets go round the Sun in circular orbits of radius r0, whose center is the Sun itself. Due to the motion of the Sun, the center of the circle shifts in the forward direction, i.e., the direction of the motion of the Sun, by Δr, making the new position r0+Δr in the direction of motion. Consequently, the point in the opposite direction shifts to a new position r0−Δr because of the shifted center. Hence, if we plot the motion of the planets around the Sun and try to close the orbit, it will appear as if it were an ellipse, even though it is never a closed shape.
An ellipse with a small eccentricity is identical to a circular orbit in which the center of the circle has been slightly shifted. This can be seen more easily when we examine in detail the transformation of shapes from a circle to an ellipse. When a circle is slightly perturbed to become an ellipse, the change of shape is usually described as a gradual transformation from the circle to the familiar elongated shape of an ellipse. But in the case of the elliptical shape of an orbit around the sun, since the eccentricity is small, the ellipse is equivalent to a circle with a shifted center: when a small eccentricity is added, the first term of the series expansion of the ellipse appears as a shift of the central circular field of forces. It is only the second term of the series expansion which flattens the orbit into the well-known elongated shape. It may be noted that in an elliptical orbit the star is at one of the two foci. That specific focus determines the direction of motion.
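This first-order statement is easy to verify numerically; a sketch (a = 1, e = 0.05, chosen arbitrarily):

    import math

    a, e = 1.0, 0.05
    worst = 0.0
    for k in range(360):
        th = math.radians(k)
        # Ellipse with focus at the origin: r = a(1 - e^2)/(1 + e*cos(theta))
        r = a * (1 - e**2) / (1 + e * math.cos(th))
        x, y = r * math.cos(th), r * math.sin(th)
        # Compare with a circle of radius a whose centre is shifted to (-a*e, 0)
        worst = max(worst, abs(math.hypot(x + a * e, y) - a))

    print(f"max deviation: {worst:.2e}, order e^2 = {e**2:.2e}")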
Now let us examine the general concept of elliptical orbits. The orbital velocity of an orbiter at any point in the orbit is the vector sum of two independent motions: the centripetal acceleration at that point in the field, which determines the curve, and the tangential velocity, which is constant and directed in a straight line. The orbiter must retain its innate motion throughout the orbit, irrespective of the shape of the orbit; otherwise, its innate motion would dissipate, and the orbit would not be stable. Therefore, the orbiter always retains its innate motion over each and every differential. If we take the differentials at perihelion and aphelion and compare them, we find that the tangential velocities due to innate motion are equal, meaning that the velocity tangent to the ellipse is the same in both places. But the accelerations are vastly different. Yet the ellipse shows the same curvature at both places. If we draw the line joining perihelion and aphelion and bisect it, the two points where the bisecting line intersects the orbit show equal velocities, but in opposite directions. Thus, one innate motion shows itself in four different ways. These are macro manifestations of the four fundamental forces of Nature, as explained below.
From Kepler’s second law (the law of equal areas), we know that an imaginary line drawn from the center of the sun to the center of a planet sweeps out equal areas in equal intervals of time. Thus, the apparent velocity of the planet at perihelion (the closest point, where the strength of gravity is much greater) is faster than that at aphelion (the farthest point, where the strength of gravity is much less). Assuming the planets to have equal mass, these cannot be balanced (since the distances are different). There is still a net force that makes the planet near perihelion slide away fast, but allows the planet near aphelion to move apparently slowly. These are the proximity-proximity and proximity-distance variables. Since the proximity-proximity interaction happens continuously and keeps the planet at a constant tangential velocity, we call this motion nitya gati – meaning perpetual motion. Since the proximity-distance interaction leads to the coupling of one particle with other particles, like the proton-neutron reaction at the micro level, or provides the centripetal acceleration of the planet at the macro level, we call this motion yagnya gati – meaning coupled motion.
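The equal-areas law itself fixes the speed ratio; a one-step sketch using Jupiter's distances quoted below:

    # Equal areas in equal times means r * v_tangential is the same at
    # perihelion and aphelion, so v_peri / v_aph = r_aph / r_peri.
    r_peri, r_aph = 741e6, 817e6   # km (Jupiter's figures, quoted below)
    print(f"v_peri / v_aph = {r_aph / r_peri:.3f}")   # ~1.103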
The motions of the planet at the two points where the bisecting line through the mid-point of the ellipse intersects its circumference are the distance-proximity and distance-distance variables. This is because at one point the planet moves towards a net lower velocity, whereas at the other point it moves towards a net higher velocity. We call the former motion samprasaada gati – meaning constructive motion, because it leads to interaction among particles and brings the planet nearer the Sun. We call the beta particle suparna – meaning isolated radioactive particle. Hence we call the latter motion saamparaaya gati – meaning radioactive disintegration.
Now, let us consider the example of the Sun-Jupiter orbit. The mass of Jupiter is approximately 1/1047 of that of the Sun. The barycenter of the Sun-Jupiter system lies above the Sun’s surface, at about 1.068 solar radii from the Sun’s center, which amounts to about 742,800 km. Both the Sun and Jupiter revolve around this point. At perihelion, Jupiter is 741 million km, or 4.95 astronomical units (AU), from the Sun. At aphelion it is 817 million km, or 5.46 AU. That gives Jupiter a semi-major axis of 778 million km, or 5.2 AU, and a mild eccentricity of 0.048. This shows the close relationship between relative mass and the barycenter point that balances both bodies. This balancing force that stabilizes the orbit is known as gravity.
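These figures are mutually consistent, as a two-line check shows (solar radius taken as roughly 696,000 km):

    # Barycenter distance from the Sun's centre: d = a * m_J / (m_Sun + m_J)
    a_km, mass_ratio, solar_radius = 778e6, 1 / 1047, 696000.0
    d = a_km * mass_ratio / (1 + mass_ratio)
    print(f"{d:,.0f} km = {d / solar_radius:.3f} solar radii")   # ~742,000 km, ~1.07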
If the bodies have different masses, the forces exerted by them on the external field will not be equal. Thus, they will be propelled to different positions in the external field, where the net density over the area is equal for both – obviously in proportion to their masses. Thus, the barycenter, which represents the center of mass of the system, is related to the proportionate mass of the two bodies. The barycenter is one of the foci of the elliptical orbit of each body. It changes continuously due to the differential velocities of the two bodies. When these effects appear between the centre of mass and the boundary of a body, they are termed the four fundamental forces of Nature: the strong force and radioactive disintegration form one couple, and the weak force and the electromagnetic force form the other, less strong, couple. The net effect of the internal dynamics of the body (inner-space dynamics) is expressed as its charge outside it.
Assuming that gravity is an attractive force, let us take the example of the Sun attracting Jupiter towards its present position S, and Jupiter attracting the Sun towards its present position J. The two forces are in the same line and balance. If both bodies were relatively stationary, or moving with uniform velocity with respect to each other, the forces, being balanced and oppositely directed, would cancel each other. But since both are moving with different velocities, there is a net force. The force exerted by each on the other takes some time to travel from one to the other. If the Sun attracts Jupiter toward its previous position S′, i.e., the position it held when the force of attraction started out to cross the gulf, and Jupiter attracts the Sun towards its previous position J′, then the two forces form a couple. This couple will tend to increase the angular momentum of the system and, acting cumulatively, will soon cause an appreciable change of period. The cumulative effect of this makes the planetary orbits wobble.
Before we re-examine the Lorentz force law in light of the above description, we must re-examine the mass-energy equivalence equation. The equation e = mc² is well established and cannot be questioned. But its interpretation must be questioned, for the simple reason that it does not conform to mathematical principles. Before that, let us note some facts that Mr. Einstein either overlooked or glossed over.
It is generally accepted that space is homogeneous. We posit that space only “looks” homogeneous over very large scales, because what we perceive as space is the net effect of radiation reaching our eyes or the measuring instrument. Since the mass-energy density at different points in space varies, it cannot be homogeneous. Magnetic force acts only between magnetic substances, and not between all substances in the same space. Gravity interacts only with mass. Whether inside a black hole or in open space, it is only a probability amplitude distribution, and it is part of the fields that exist in the neighborhood of the particles. Thus, space cannot be homogeneous. This has been proved by the latest observations of the Cosmic Microwave Background – the so-called afterglow of the big bang. This afterglow is not perfectly smooth – hot and cold spots speckle the sky. In recent years, scientists have discovered that these spots are not quite as randomly distributed as they first appeared – they align in a pattern that points out a special direction in space. Cosmologists have dubbed it the “axis of evil”. More hints of a cosmic arrow come from studies of supernovae, stellar cataclysms that briefly outshine entire galaxies. Cosmologists have been using supernovae to map the accelerating expansion of the universe. Detailed statistical studies reveal that supernovae are moving even faster in a line pointing just slightly off the “axis of evil”. Similarly, astronomers have measured galaxy clusters streaming through space at a million miles an hour toward an area in the southern sky.
For the same reason, we cannot accept that space is isotropic. Considering the temperature of the cosmic background radiation (2.73 K) as the unit, absolute zero, which is a notch below the melting point of helium, lies exactly 100 such units below the freezing point of water (273 K). Similarly, the interiors of stars and galaxies are at most 1000 times hotter than the melting point of carbon, i.e., about 3500 K. The significance of these two elements is well known and can be discussed separately. The ratio of 100:1000 is also significant. Since these are all scattered in space – and hence affect its temperature at different points – space cannot be isotropic either. We have hot stars and icy planets and other Kuiper Belt Objects (KBOs) in space. If we take the average, we get a totally distorted picture, which is not a description of reality.
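The 100:1 ratio cited here is plain arithmetic:

    # Freezing point of water versus the CMB temperature, in kelvin.
    print(273.15 / 2.73)   # ~100.05
    # And 1000 times carbon's melting point (~3500 K) is a few million K,
    # the order of stellar-interior temperatures.
    print(1000 * 3500)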
Space is not symmetric under time translation either. Just as space is the successive interval between all objects in terms of nearness or farness from a designated point, or with reference to the observer, time is the interval between successive changes in the states of objects in terms of nearness or farness from a designated epoch or event, or from the time of measurement. Since all objects in space do not continuously change their position with respect to all other objects, space is differentiated from time, which is associated with continuous change of state of all objects. If we measure the spread of an object, i.e., the relationship between its “inner space” and its “outer space”, from two opposite directions, there is no change in position. Thus, the concept of a negative direction of space is valid. Time is related to change of state, which materializes because of the interaction of bodies with forces. Force is unidirectional. It can only push. There is no such thing as pull; it is always a complementary push from the opposite direction. (Magnetism acts only between magnetic substances and not universally like other forces; magnetic fields do not obey the inverse-square law; it has a different explanation.) Consider an example: A + B → C + D.
Here a force makes A interact with B to produce C and D. The same force doesn’t act on C and D as they do not exist at that stage. If we change the direction of the force, B acts on A. Here only the direction of force and not the interval between the states before and after application of force (time) will change. Moreover, C and D do not exist even at that stage. Hence the equation would be:
B + A → C + D and not B + A ← C + D.
Thus, it does not affect causality. There can be no negative direction for time or cause and effect. Cause must precede effect.
Space is not symmetric under a "boost" either. That the equations of physics work the same in a moving coordinate system as in a stationary one has nothing to do with space. Space in no way interacts with or affects it.
Transverse waves are always characterized by particle motion being perpendicular to the wave motion. This implies the existence of a medium through which the reference wave travels and with respect to which the transverse wave travels in a perpendicular direction. In the absence of the reference wave, which is a longitudinal wave, the transverse wave cannot be characterized as such. All transverse waves are background invariant by their very definition. Since light propagates in transverse waves, Mr. Maxwell used a transverse wave and aether fluid model for his equations. Mr. Feynman has shown that the Lorentz transformation and the invariance of the speed of light follow from Maxwell's equations. Mr. Einstein's causal analysis in SR is based on Mr. Lorentz's motional theory, where a propagation medium is essential to solve the wave equation. Mr. Einstein's aether-less relativity is supported neither by Maxwell's equations nor by the Lorentz transformations, both of which are medium (aether) based. Thus, the non-observance of aether drag (as observed in the Michelson-Morley experiments) cannot serve to ultimately disprove the aether model. The equations describing spacetime, based on Mr. Einstein's theories of relativity, are mathematically identical to the equations describing ordinary fluid and solid systems. Yet, it is paradoxical that physicists have denied the aether model while using the formalism derived from it. They do not realize that Mr. Maxwell used a transverse wave model, whereas aether drag concerns longitudinal waves. Thus, the notion that Mr. Einstein's work is based on an "aether-less model" is a myth. All along he used the aether model, while claiming the very opposite.
If light consists of particles, as Mr. Einstein had suggested in his 1911 paper, the principle of constancy of the observed speed of light seems absurd. A stone thrown from a speeding train can do far more damage than one thrown from a train at rest; since the speed of the particle is not independent of the motion of the object emitting it. And if we take light to consist of particles and assume that these particles obey Newton’s laws, then they would conform to Newtonian relativity and thus automatically account for the null result of the Michelson-Morley experiment without recourse to contracting lengths, local time, or Lorentz transformations. Yet, Mr. Einstein resisted the temptation to account for the null result in terms of particles of light and simpler, familiar Newtonian ideas, and introduced as his second postulate something that was more or less obvious when thought of in terms of waves in an aether.
Mr. Maxwell's view - that the sum total of the electric field around a volume of space is proportional to the charges contained within - has to be considered carefully. Charge always flows from higher concentration to lower concentration till the system acquires equilibrium. But note his words: "around a volume of space" and "charges contained within." This means a confined space, i.e., an object and its effects on its surrounding field. It is not free or unbound space.
Similarly, his view - that the sum total of the magnetic field around a volume of space is always zero, indicating that there are no magnetic charges (monopoles) - has to be considered carefully. With a bar magnet, the number of field lines "going in" and those "going out" cancel each other exactly, so that there is no deficit that would show up as a net magnetic charge. But then we must distinguish between the field lines "going in" and those "going out". Electric charge is always associated with heat, and magnetic charge with the absence or confinement of heat. Where the heat component dominates, it pushes out; where the magnetic component dominates, it confines or goes in. This is evident from the magnetospheric field lines and reconnections of the Earth-Sun and the Saturn-Sun systems. This is the reason why a change over time in the electric field or a movement of electric charges (current) induces a proportional vorticity in the magnetic field, and a change over time in the magnetic field induces a proportional vorticity in the electric field, but in the opposite direction. In what is called free space, these conditions do not apply, as charge can only be experienced by a confined body. We do not need the language of vector calculus to state these obvious facts.
In the example of divergence, it is usually believed that if we imagine the electric field with lines of force, divergence basically tells us how the lines are "spreading out". For the lines to spread out, there must be something to "fill the gaps". These things would be particles with charge. But there are no such things in empty space, so it is said that the divergence of the electric field in empty space is identically zero. This is put mathematically as: div E = 0 and div B = 0.
The above statement is wrong physics. Since space is not empty, it must contain something. There is nothing in the universe that does not contain charge. After all, even quarks and leptons have charge. Neutrons have a small residual negative charge (1/11 that of the electron, as per our calculation). Since charges cannot be stationary unless confined, i.e., unless they are contained in or by a body, they must always flow from higher concentration to lower concentration. Thus, empty space must be full of flowing charge, as cosmic rays and other radiating particles and energies. In the absence of sufficient obstruction, they flow in straight lines and not in geodesics.
This does not mean that divergence in space is a number or a scalar field, because we know that the mean density of free space is not the same everywhere, and density fluctuations affect the velocity of charge. As an example, let us dump huge quantities of common salt or gelatin powder on one bank of a river flowing with constant velocity. It starts diffusing across the breadth of the river, imparting a viscosity gradient. Now if we put a small canoe on the river, the canoe will take a curved path, just as light passing by massive stars bends. We call this "vishtambhakatwa". The bending will be proportional to the viscosity gradient. We do not need relativity to explain this physics. We require mathematics only to calculate "how much" the canoe or the light pulse will be deflected, but not whether, why, when or where it is deflected. Since these are proven facts, div E = 0 and div B = 0 are not constant functions and are wrong descriptions of physics.
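The "how much" can be illustrated with a minimal numeric sketch in Python. Here the viscosity gradient is modelled as a linear refractive-index gradient and the path is integrated paraxially; the names and parameter values (gradient, width, n0) are illustrative assumptions, not measured quantities:

    # Paraxial ray (or canoe) crossing a channel whose index rises linearly
    # across it: d(theta)/dx = (1/n) dn/dy, so deflection grows with the gradient.
    def deflection(gradient, width=100.0, n0=1.0, steps=10000):
        """Integrate the path across the channel; return the exit angle (rad)."""
        dx = width / steps
        y, theta = 0.0, 0.0
        for _ in range(steps):
            n = n0 + gradient * y          # index at the current position
            theta += (gradient / n) * dx   # paraxial bending
            y += theta * dx                # path curves toward the denser side
        return theta

    for g in (1e-4, 2e-4, 4e-4):
        print(f"gradient {g:.0e} -> exit angle {deflection(g):.4f} rad")

Doubling the gradient roughly doubles the exit angle, which is the proportionality claimed above.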
Though Mr. Einstein used the word "speed" for light ("die Ausbreitungsgeschwindigkeit des Lichtes mit dem Orte variiert" - "the speed of light varies with the locality"), all translations of his work convert "speed" to "velocity", so that scientists generally tend to think of it as a vector quantity. They miss the way Mr. Einstein refers to 'c', which is most definitely a speed. The word "velocity" in the translations is the common usage, as in "high velocity bullet", and not the vector quantity that combines speed and direction. Mr. Einstein held that the speed varies with position, hence it causes curvilinear motion. He backed this up in his 1920 Leyden address, where he said: "According to this theory the metrical qualities of the continuum of space-time differ in the environment of different points of space-time, and are partly conditioned by the matter existing outside of the territory under consideration. This space-time variability of the reciprocal relations of the standards of space and time, or, perhaps, the recognition of the fact that 'empty space' in its physical relation is neither homogeneous nor isotropic, compelling us to describe its state by ten functions (the gravitation potentials gμν), has, I think, finally disposed of the view that space is physically empty". This is a complex way of telling the obvious.
Einsteinian space-time curvature calculations were based on vacuum, i.e. on a medium without any gravitational properties (since it has no mass). Now if a material medium is considered (which space certainly is), then it will have a profound effect on the space-time geometry as opposed to that in vacuum. It will make the gravitational constant differential for different localities. We hold this view. We do not fix any upper or lower limits to the corrections that would be applicable to the gravitational constant. We make it variable in seven and eleven groups. We also do not add a repulsive gravitational term to general relativity, as we hold that forces only push.
Since space is not empty, it must have different densities at different points. The density is a function of mass, and change of density is a function of energy. Thus, the equation e = mc² does not show mass-energy equivalence, but the density gradient of space. The square of a velocity has no physical meaning except when used to measure an area of length and breadth equal to the distance measured by c. The above equation does not prove mass-energy convertibility; it only shows the energy required to spread a designated quantity of mass over a designated area, so that the mean density can be called a particular type of sub-field or jaala - as we call it.
The interactions we discussed while defining dimension appear to be different from those of strong/weak/electromagnetic interactions. The most significant difference involves the weak interactions. It is thought to be mediated by the high energy W and Z bosons. Now, we will discuss this aspect.
The W boson is said to be the mediator in beta decay by facilitating the flavor change, or reversal, of a quark from a down quark to an up quark: d → u + W⁻. The mass of a quark is said to be about 4 MeV and that of a W boson about 80 GeV - almost the mass of an iron atom. Thus, the mediating particle outweighs the mediated particle by a ratio of 20,000 to 1. Since Nature is extremely economical in all operations, why should it require such a heavy boson to flip a quark over? There is no satisfactory explanation for this.
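The ratio itself is simple arithmetic (masses as quoted above):

    m_W_GeV = 80.0     # W boson mass as quoted above
    m_quark_MeV = 4.0  # down-quark mass as quoted above
    print((m_W_GeV * 1000.0) / m_quark_MeV)  # 20000.0, i.e., 20,000 to 1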
The W⁻ boson then decays into an electron and an antineutrino: W⁻ → e⁻ + ν̄. Since neutrinos and anti-neutrinos are said to be massless and the electron weighs about 0.5 MeV, there is a great imbalance. Though the decay is not intended to be an equation, a huge amount of energy magically appearing from nowhere at the required time and then disappearing into nothing needs explanation. We have shown that uncertainty is not a law of Nature, but a result of natural laws relating to measurement that reveal a kind of granularity at certain levels of existence that is related to causality. Thus, the explanations of Mr. Dirac and others in this regard are questionable.
Messrs. Glashow, Weinberg, and Salam "predicted" the W and Z bosons using an SU(2) gauge theory. But the bosons in a gauge theory must be massless. Hence one must assume that the masses of the W and Z bosons were "predicted" by some other mechanism that gives the bosons their mass. It is said that the mass is acquired through the Higgs mechanism - a form of spontaneous symmetry breaking. But this is an oxymoron. Spontaneous symmetry breaking is symmetry that is broken spontaneously. Something that happens spontaneously requires no mechanism or mediating agent. Hence the Higgs mechanism has to be a spontaneous action and not a mechanism. It requires no mediating agent - at least not the Higgs boson. Apparently, the SU(2) problem has been sought to be solved by first arbitrarily calling it a symmetry, then pointing to the spontaneous breaking of this symmetry without any mechanism, and finally calling that breaking the Higgs mechanism! Thus, the whole exercise produces only a name!
A parity violation means that beta decay works only on left-handed particles or right-handed anti-particles. Messrs. Glashow, Weinberg, and Salam provided a theory to explain this using a lot of complicated renormalized mathematics, which showed both a parity loss and a charge conjugation loss. However, at low energies, one of the Higgs fields acquires a vacuum expectation value and the gauge symmetry is spontaneously broken down to the symmetry of electromagnetism. This symmetry breaking would produce three massless Goldstone bosons, but they are said to be "eaten" by three of the photon-like fields through the Higgs mechanism, giving them mass. These three fields become the W⁻, W⁺, and Z bosons of the weak interaction, while the fourth gauge field, which remains massless, is the photon of electromagnetism.
All the evidence in support of the Higgs mechanism turns out to be evidence that huge energy packets near the predicted W and Z masses exist. In that case, why should we accept that, because big particles with the W and Z masses exist for very short times, the SU(2) gauge theory cannot be correct in predicting zero masses, and that the gauge symmetry must be broken so that the Higgs mechanism is proved correct, without any mechanical reason for such breaking? There are other explanations for this phenomenon. If the gauge theory needs to be bypassed with a symmetry breaking, it is not a good theory to begin with. Normally, if equations yield false predictions - like these zero boson masses - the "mathematics" must be wrong, because mathematics is done at "here-now" and zero is the absence of something at "here-now". One cannot patch it with a non-mechanical "field mechanism". Thus, the Higgs mechanism is not a mechanism at all. It is a spontaneous symmetry breaking, and there is no evidence for any mechanism in something that is spontaneous.
Since charge is perceived through a mechanism, a broken symmetry that is gauged may mean that the vacuum is charged. But charge is not treated as mechanical in QED. Even before the Higgs field was postulated, charge was thought to be mediated by virtual photons. Virtual photons are non-mechanical, ghostly particles. They are supposed to mediate forces spontaneously, with no energy transfer. This is mathematically and physically invalid. Charge cannot be assigned to the vacuum, since that amounts to assigning characteristics to the void. One of the first postulates of physics is that extensions of force, motion, or acceleration cannot be assigned to "nothing". For charge to be mechanical, it would have to have extension or motion. All virtual particles and fields are imaginary assumptions. The Higgs field, like Dirac's field, is "mathematical" imagery.
The proof for the mechanism is said to have been obtained in the experiment at the Gargamelle bubble chamber, which photographed the tracks of a few electrons suddenly starting to move - seemingly of their own accord. This is interpreted as a neutrino interacting with the electron by the exchange of an unseen Z boson. The neutrino is otherwise undetectable. Hence the only observable effect is the momentum imparted to the electron by the interaction. No neutrino or Z boson is detected. Why should it be interpreted to validate the imaginary postulate? The electron could have moved due to many other reasons.
It is said that the W and Z bosons were detected in 1983 by Mr. Carlo Rubbia. This experiment only detected huge energy packets that left a track that was interpreted as a particle. It did not tell that it was a boson or that it was taking part in any weak mediation. Since large mesons can be predicted by other, simpler methods (e.g., stacked spins, as proposed by some), this particle detection is not proof of the weak interaction or of the Higgs mechanism. It is only an indication of a large particle or two.
In section 19.2 of his book "The Quantum Theory of Fields", Mr. Weinberg says: "We do not have to look far for examples of spontaneous symmetry breaking. Consider a chair. The equations governing the atoms of the chair are rotationally symmetric, but a solution of these equations, the actual chair, has a definite orientation in space". Classically, it was thought that parity was conserved because spin is an energy state. To conserve energy, there must be an equal number of left-handed and right-handed spins. Every left-handed spin cancels a right-handed spin of the same size, so that the sum is zero. If they were created from nothing - as in the Big Bang - they must also sum up to nothing. Thus, an equal number of left-handed and right-handed spins is assumed at the quantum level.
It was also expected that interactions conserve parity, i.e., anything that can be done from left to right can also be done from right to left. Observations like beta decay showed that parity is not conserved in some quantum interactions, because some interactions showed a preference for one spin over the other. The electroweak theory supplied a mystical and non-mechanical reason for it. But it is known that parity is not always conserved. Mr. Weinberg seems to imply that because there is a chair facing west, and not one facing east, there is a parity imbalance: that one chair has literally lopsided the entire universe! This, he explains as a spontaneously broken symmetry!
A spontaneously broken symmetry in field theory is always associated with a degeneracy of vacuum states. For the vacuum the expectation value of (a set of scalar fields) must be at a minimum of the vacuum energy. It is not certain that in such cases the symmetry is broken, because there is the possibility that the true vacuum is a linear superposition of vacuum states in which the summed scalar fields have various expectation values, which would respect the assumed symmetry. So, a degeneracy of vacuum states is the fall of these expectation values into a non-zero minimum. This minimum corresponds to a state of broken symmetry.
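The "non-zero minimum" being described can be made concrete with a minimal sketch in Python: minimizing the standard quartic potential V = μ²|φ|² + λ|φ|⁴ (the values of μ² and λ below are illustrative assumptions) shows the expectation value falling into a non-zero minimum:

    import numpy as np

    mu2, lam = -1.0, 0.25  # illustrative values only; mu2 < 0 is the assumption
    phi = np.linspace(0, 3, 3001)
    V = mu2 * phi**2 + lam * phi**4
    print("numeric minimum at |phi| =", phi[np.argmin(V)])           # ~1.414
    print("analytic minimum at |phi| =", np.sqrt(-mu2 / (2 * lam)))  # sqrt(2)

Every point on the circle |φ| = √2 is an equally good vacuum, so picking one of them "breaks" the symmetry of the potential.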
Since true vacuum is non-perceptible, hence nothingness, with only one possible state - zero - logically it would have no expectation values above zero. However, Mr. Weinberg assumed that the vacuum can have a range of non-zero states, giving both it and his fields a non-zero energy. Based on this wrong assumption, Mr. Weinberg manipulated these possible ranges of energies, assigning a possible quantum effective action to the field. Then he started looking at various ways it might create parity or subvert parity. Since any expectation value above zero for the vacuum is wholly arbitrary and only imaginary, he could have chosen either parity or non-parity. In view of Messrs. Yang and Lee's finding, Mr. Weinberg chose non-parity. This implied that his non-zero vacuum degenerates to the minimum. Then he applied this to the chair! Spontaneous symmetry breaking actually occurs only for idealized systems that are infinitely large. Does Mr. Weinberg then claim that a chair is an idealized system that is infinitely large?
According to Mr. Weinberg, the appearance of broken symmetry for a chair arises because it has a macroscopic moment of inertia I, so that its ground state is part of a tower of rotationally excited states whose energies are separated by only tiny amounts, of the order h²/I. This gives the state vector of the chair an exquisite sensitivity to external perturbations, so that even very weak external fields will shift the energy by much more than the energy difference of these rotational levels. As a result, any rotationally asymmetrical external field will cause the ground state or any other state of the chair with definite angular momentum numbers to rapidly develop components with other angular momentum quantum numbers. The states of the chair that are relatively stable with respect to small external perturbations are not those with definite angular momentum quantum numbers, but rather those with a definite orientation, in which the rotational symmetry of the underlying theory is broken.
Mr. Weinberg declares that he is talking about symmetry, but actually he is talking about decoherence. He is trying to explain why the chair is not a probability or an expectation value, and why its wave function has collapsed into a definite state. Quantum mathematics works by proposing a range of states. This range is determined by the uncertainty principle. Mr. Weinberg assigned a range of states to the vacuum and then extended that range based on the non-parity finding of Messrs. Yang and Lee. But the chair is not a range of states: it is a state - the ground state. To degenerate or collapse into this ground state, or decohere from the probability cloud into the definite chair we see and experience, the chair has to interact with its surroundings. The chair is most stable when the surroundings are stable (having "a definite orientation"); so the chair aligns itself to this definite orientation. Mr. Weinberg argues that in doing so, it breaks the underlying symmetry. Thus, Mr. Weinberg does not know what he is talking about!
Mr. Weinberg believes that the chair is not just probabilistic as a matter of definite position. Apparently, he believes it is probabilistic in spin orientation also. He even talks about the macroscopic moment of inertia. This is extremely weird, because the chair has no macroscopic angular motion. The chair may be facing east or west, but there is no indication that it is spinning, either clockwise or counterclockwise. Even if it were spinning, there is no physical reason to believe that a chair spinning clockwise should have a preponderance of quanta in it spinning clockwise. QED has never shown that it is impossible for a macro-object spinning clockwise to have all its constituent quanta spinning counterclockwise. However, evidently Mr. Weinberg makes this assumption without any supporting logic, evidence or mechanism. Spin parity was never thought to apply to macro-objects. A chair facing or spinning in one direction is not a fundamental energy state of the universe, and the Big Bang doesn't care if there are five chairs spinning left and four spinning right. The Big Bang didn't create chairs directly out of the void, so we don't have to conserve chairs!
Electroweak theory, like all quantum theories, is built on gauge fields. These gauge fields have built-in symmetries that have nothing to do with the various conservation laws. What physicists tried to do was to choose gauge fields that matched the symmetries they had found, or hoped to find, in their physical fields. QED began with the simplest field, U(1), but the strong and weak forces had more symmetries and therefore required SU(2) and SU(3). Because these gauge fields were supposed to be mathematical fields (which are abstractions) and not real physical fields, and because they contained symmetries of their own, physicists soon got tangled up in the gauge fields. Later experiments showed that the symmetries in the so-called mathematical fields didn't match the symmetries in nature. However, the quantum theory could be saved if the gauge field could somehow be broken - either by adding ghost fields or by subtracting symmetries by "breaking" them. This way, the physicists ended up with 12 gauge bosons, only three of which are known to exist, and only one of which has been well linked to the theory. Of these, the eight gluons are completely theoretical and only fill slots in the gauge theory. The three weak bosons apparently exist, but no experiment has tied them to beta decay. The photon is the only boson known to exist as a mediating "particle", and it was known long before gauge theory entered the picture.
Quantum theory has got even the only verified boson - the photon - wrong, since the boson of quantum theory is not a real photon: it is a virtual photon! QED couldn't conserve energy with a real photon, so the virtual photon mediates charge without any transfer of energy. The virtual photon creates a zero-energy field and a zero-energy mediation. The photon does not bump the electron; it just whispers a message in its ear. So, from a theoretical standpoint, the gauge groups are not the solution, they are part of the problem. We should be fitting the mathematics to the particles, not the particles to the mathematics. Quantum physicists claim repeatedly that their field is mainly experimental, but any cursory study of its history shows that this claim is not true. Quantum physics has always been primarily "mathematical". A large part of 20th-century experiment was the search for particles to fill out the gauge groups, and the search continues, because they are searching blindfolded in a dark room for a black cat that does not exist. When the US Congress wanted to curtail funding for research in this vain exercise, the hypothetical Higgs boson (which is non-existent) was named the "God particle" to sway public opinion. Now they claim that they are "tantalizingly close" not to discovering the "God particle", but to "the possibility of getting a glimpse of it". How long will the scientists continue to fool the public?
Mr. Weinberg's book proves the above statement beyond any doubt. 99% of the book is couched in leading "mathematics" that takes the reader through a mysterious maze. This "mathematics" has its own set of rules that defy logical consistency. It is not a tool to measure how much a system changes when some of its parameters change. It is like a vehicle possessed by a spirit: you climb in and it takes you where it wants to go! Quantum physicists never look at a problem without first loading it down with all the mathematics they know, to make it thoroughly incomprehensible. The first thing they do is write everything as integrals and/or partial derivatives, whether they need to be so written or not. Then they bury their particles under matrices and actions and Lagrangians and Hamiltonians and Hermitian operators and so on - as much machinery as they can apply. Only after thoroughly confusing everyone do they begin calculating. Mr. Weinberg admits that Goldstone bosons "were first encountered in specific models by Goldstone and Nambu." It may be noted that the bosons were first encountered not in experiments: they were encountered in the mathematics of Mr. Goldstone and Mr. Nambu. As "proof" of their existence, Mr. Weinberg offers an equation in which the action is invariant under a continuous symmetry, and in which a set of Hermitian scalar fields is subjected to infinitesimal transformations involving a finite real matrix. To solve it, he also needs the spacetime volume and the effective potential.
In equation 21.3.36, he gives the mass of the W particle as m_W = ev/(2 sin θ), where e is the electromagnetic coupling (the electron charge), v is the vacuum expectation value, and θ is the electroweak mixing angle. The angle was taken from elastic scattering experiments between muon neutrinos and electrons, which gave a value for θ of about 28°. Mr. Weinberg develops v right out of the Fermi coupling constant, so that it has a value here of 247 GeV:
v = (√2·GF)^(-1/2) ≈ 247 GeV
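As a numeric cross-check of these two relations, a minimal sketch in Python (α ≈ 1/137 and θ ≈ 28° are assumed inputs, and the running of the couplings is ignored, so this is only an order-of-magnitude verification):

    import math

    G_F = 1.1664e-5                     # Fermi constant, GeV^-2
    v = (math.sqrt(2) * G_F) ** -0.5    # vacuum expectation value, ~246 GeV
    e = math.sqrt(4 * math.pi / 137.0)  # electromagnetic coupling, ~0.303
    theta = math.radians(28.0)          # electroweak mixing angle from the text

    m_W = e * v / (2 * math.sin(theta))
    g = e / math.sin(theta)             # weak coupling, ~0.65 as quoted later
    print(f"v = {v:.1f} GeV, m_W = {m_W:.1f} GeV, g = {g:.2f}")

With these inputs the formula indeed returns roughly 80 GeV for the W mass.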
All this is of great interest for the following reasons:
· There is no muon neutrino in beta decay, so the scattering angle of electrons and muon neutrinos tells us nothing about the scattering angles of protons and electrons, or of electrons and electron antineutrinos. The electron antineutrino is about 80 times smaller than a muon neutrino, so it is hard to see how the scattering angles could be equivalent. It appears this angle was chosen afterwards to match the data. Mr. Weinberg even admits it indirectly. The angle was not known until 1994; the W was discovered in 1983, when the angle was unknown.
· Mr. Fermi gave the coupling value to the fermions, but Mr. Weinberg gives the derived value to the vacuum expectation. This means that the W particle comes right out of the vacuum, and the only reason it doesn't have the full value of 247 GeV is the scattering angle and its relation to the electron. We were initially shocked in 1983 to find 80 GeV coming from nowhere in the bubble chamber, but now we have 247 GeV coming from nowhere. Mr. Weinberg has magically borrowed 247 GeV from the void to explain one neutron decay! He gives it back 10⁻²⁵ seconds later, so that the loan is repaid. But 247 GeV is not a small quantity in the void. It is very big.
Mr. Weinberg says the symmetry breaking is local, not global. It means he wanted to keep his magic as localized as possible. A global symmetry breaking might have unforeseen side-effects, warping the gauge theory in unwanted ways. But a local symmetry breaking affects only the vacuum at a single "point". The symmetry is broken only within the hole that the W particle pops out of and goes back into. If we fill the hole back fast enough and divert the audience's gaze with the right patter, we won't have to admit that any rules were broken or that any symmetries really fell. We can solve the problem at hand, keep the mathematics we want to keep, and hide the spilled milk in a 10⁻²⁵ s rabbit hole.
Mr. Byron Roe's "Particle Physics at the New Millennium" deals with the same subject in an even more curious fashion. He clarifies: "Imagine a dinner at a round table where the wine glasses are centered between pairs of diners. This is a symmetric situation and one doesn't know whether to use the right or the left glass. However, as soon as one person at the table makes a choice, the symmetry is broken and the glass for each person to use is determined. It is no longer right-left symmetric. Even though a Lagrangian has a particular symmetry, a ground state may have a lesser symmetry".
There is nothing in the above description that could be an analogue to a quantum mechanical ground state. Mr. Roe implies that the choice determines the ground state and the symmetry breaking. But there is no existential or mathematical difference between reality before and after the choice. Before the choice, the entire table and everything on it was already in a sort of ground state, since it was not a probability, an expectation, or a wave function. For one thing, prior choices had been made to bring it to this point. For another, the set before the choice was just as determined as the set after the choice, and just as real. Decoherence did not happen with the choice: it either happened long before, or it was happening all along. Nor was there any symmetry whose violation would have quantum effects. As with entropy, the universe does not keep track of things like this: there is no conservation of wine glasses, any more than there is a conservation of Mr. Weinberg's chairs. Position is not conserved, nor is direction. Parity is a conservation of spin, not of position or direction. Mr. Roe might as well claim that declination, or lean, or comfort, or wakefulness, or hand position is conserved. Should we monitor chin angles at this table as well, and sum them up relative to the Big Bang?
Mr. Roe gives some very short mathematics for the Goldstone boson getting “eaten up by the gauge field” and thereby becoming massive, as follows:
L = (D_β φ)*(D^β φ) - μ²φ*φ - λ(φ*φ)² - (¼)F_βν F^βν
where F_βν = ∂_ν A_β - ∂_β A_ν; D_β = ∂_β - igA_β; and A_β → A_β + (1/g)∂_β α(x).
Let φ₁ ≡ φ₁′ + ⟨0|φ₁|0⟩ ≡ φ₁′ + v, with v = √(μ²/λ), and substitute.
The new terms involving A are:
(½)g²v²A_ν A^ν - gvA_ν ∂^ν φ₂
He says: "The first term is a mass term for A_ν. The field has acquired mass!" But the mathematics suddenly stops. He chooses a gauge so that φ₂ = 0, which deletes the last term above. But then he switches to a verbal description: "One started with a massive scalar field (one state), a massless Goldstone boson (one state) and a massless vector boson (two polarization states). After the transform there is a massive vector meson Aμ, with three states of polarization and a massive scalar boson, which has one state. Thus, the Goldstone boson has been eaten up by the gauge field, which has become massive". But where is the Aμ in that derivation? Mr. Roe has simply stated that the mass of the field is given to the bosons, with no mathematics or theory to back up his statement. He has simply jumped from A_ν to A_μ with no mathematics or physics in between!
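The substitution Mr. Roe describes can be checked symbolically. A minimal sketch in Python with sympy, for one spacetime component only (index placement and metric signs are schematic, which is all the argument here needs):

    import sympy as sp

    g, v, A = sp.symbols('g v A', real=True)
    phi1p, phi2, dphi1, dphi2 = sp.symbols('phi1p phi2 dphi1 dphi2', real=True)

    phi = (phi1p + v + sp.I * phi2) / sp.sqrt(2)  # phi1 shifted by v
    dphi = (dphi1 + sp.I * dphi2) / sp.sqrt(2)    # stands for the derivative
    Dphi = dphi - sp.I * g * A * phi              # covariant derivative

    kinetic = sp.expand(Dphi * sp.conjugate(Dphi))
    print(sp.collect(sp.expand(sp.re(kinetic)), A))
    # Among the printed terms: A**2*g**2*v**2/2 (the mass term for A)
    # and -A*g*v*dphi2 (the A-phi2 cross term quoted above).

The mass term does appear, but - as complained above - nothing in this algebra turns A_ν into a three-polarization A_μ; that step is purely verbal in Mr. Roe's account.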
The mathematics for a positive vacuum expectation value is in section 21.3 of Mr. Weinberg's book - the crucial point being equation 21.3.27. This is where he simply inserts his positive vacuum expectation value, by asserting that μ² < 0, making μ imaginary, and finding the positive vacuum value at the stationary point of the Lagrangian. (In his book, Mr. Roe never held that μ² < 0.) This makes the stationary point of the Lagrangian undefined and basically implies that the expectation values of the vacuum are also imaginary. These being undefined and unreal, thus unbound, Mr. Weinberg is free to take any steps in his "mathematics". He can do anything he wants to. He therefore juggles the "equalities" a bit more until he can get his vacuum value to slide into his boson mass. He does this very ham-handedly, since his huge Lagrangian quickly simplifies to m_W = vg/2, where v is the vacuum expectation value. It may be remembered that g in weak theory is 0.65, so that the boson mass is nearly ⅔v.
Mr. Weinberg does play some tricks here, though he hides his tricks a bit better than Mr. Roe. Mr. Roe gives up on the mathematics and just assigns his field mass to his bosons. Mr. Weinberg skips the field mass and gives his vacuum energy right to his boson, with no intermediate steps except going imaginary. Mr. Weinberg tries to imply that his gauged mathematics is giving him the positive expectation value, but it isn't. Rather, he has cleverly found a weak point in his mathematics where he can choose whatever value he needs for his vacuum input, and then transfer that energy right into his bosons.
What is the force of the weak force? In section 7.2 of his book, Mr. Roe says that “The energies involved in beta decay are a few MeV, much smaller than the 80 GeV of the W intermediate boson.” But by this he only means that the electrons emitted have kinetic energies in that range. This means that, as a matter of energy, the W doesn’t really involve itself in the decay. Just from looking at the energy involved, no one would have thought it required the mediation of such a big particle. Then why did Mr. Weinberg think it necessary to borrow 247 GeV from the vacuum to explain this interaction? Couldn’t he have borrowed a far smaller amount? The answer to this is that by 1968, most of the smaller mesons had already been discovered. It therefore would have been foolhardy to predict a weak boson with a weight capable of being discovered in the accelerators of the time. The particles that existed had already been discovered, and the only hope was to predict a heavy particle just beyond the current limits. This is why the W had to be so heavy. It was a brilliant bet, and it paid off.
Now, let us examine the Lorentz force law in the light of the above discussion. Since the theory is based on electrons, let us first examine what an electron is! This question is still unanswered, even though everything else about the electron - what it does, how it behaves, etc. - is common knowledge.
From the time electrons were first discovered, charged particles like protons and electrons have been arbitrarily assigned plus or minus signs to indicate potential, but no real mechanism or field has ever been seriously proposed. According to the electroweak theory, the carrier of charge is the messenger photon. But this photon is a virtual particle. It does not exist in the field. It has no mass, no dimension, and no energy. In electroweak theory, there is no mathematics to show a real field. The virtual field has no mass and no energy. It is not really a field, since a field must exist continuously between two discrete boundaries. A stationary boat in the deep ocean on a calm and cloudy night does not feel any force by itself. It can only feel forces with reference to another body (including the dynamics of the ocean) or land or sky. With no field to explain atomic bonding, early particle physicists had to explain the bond with the electrons. Till now, the nucleus is not fully understood. Thus the bonding continues to be assigned to the electrons. But is the theory correct?
The formation of an ionic bond proceeds when the cation, whose ionization energy is low, releases some of its electrons to achieve a stable electron configuration. But the ionic bond is used to explain the bonding of atoms, not ions. For instance, in the case of NaCl, it is a Sodium atom that loses an electron to become a Sodium cation. Since the Sodium atom is already stable, why should it need to release any of its electrons to achieve a "stable configuration" that makes it unstable? What causes it to drop an electron in the presence of Chlorine? There is no answer. The problem becomes even bigger when we examine it from the perspective of Chlorine. Why should Chlorine behave differently? Instead of dropping an electron to become an ion, Chlorine adds electrons. Since as an atom Chlorine is stable, why should it want to borrow an electron from Sodium to become unstable? In fact, Chlorine cannot "want" an extra electron, because that would amount to a stable atom "wanting" to be unstable. Once Sodium becomes a cation, it should attract a free electron, not Chlorine. So there is no reason for Sodium to release electrons, and no reason for a free electron to move from a cation to a stable atom like Chlorine - but there are lots of reasons for Sodium not to release electrons. Free electrons do not move from cations to stable atoms.
This contradiction is sought to be explained by "electron affinity". The electron affinity of an atom or molecule is defined as the amount of energy released when an electron is added to a neutral atom or molecule to form a negative ion. Here affinity has been defined by release of energy, which is an effect and not the cause! It is said that ionic bonding will occur only if the overall energy change for the reaction is exothermic. This implies that atoms tend to release energy. But why should they behave like that? The present theory only tells us that there is release of energy during the bonding. But that energy could be released in any number of mechanical scenarios, not necessarily due to electron affinity alone. Physicists have no answer for this.
It is said that all elements tend to become like noble gases, so that they gain or lose electrons to achieve this. But there is no evidence for it. If this logic is accepted, then Chlorine should want another electron to be more like Argon. But it really should want another proton, because another electron won't make Chlorine into Argon. It will only make Chlorine an ion, which is unstable. Elements do not destabilize themselves to become ions. On the other hand, ions take on electrons to become atoms. It is the ions that want to be atoms, not the reverse. If there is any affinity, it is for having the same number of electrons and protons. Suicide is a misplaced human tendency - not an atomic tendency. Atoms have no affinity for becoming ions. Yet the theory of ionic bonding suggests that the anion (an ion that is attracted to the anode during electrolysis), whose electron affinity is positive, accepts negatively signed electrons to attain a "stable" electronic configuration - and nobody has pointed out the contradiction! Elements do not gain or lose electrons; they confine and balance the charge field around them, to gain even more nuclear stability.
Current theory tells us only that atoms should have different electronegativities to bond, without explaining the cause of such action. Electronegativity cannot be measured directly. Given the current theory, it also does not follow any logical pattern on the Periodic Table. It generally runs from a low to a peak across the table, with many exceptions (Hydrogen, Zinc, Cadmium, Terbium, Ytterbium, the entire 6th period, etc.). To calculate the Pauling electronegativity for an element, it is necessary to have data on the dissociation energies of at least two types of covalent bonds formed by that element. That is a post hoc definition. In other words, the data has been used to formulate the "mathematics". The mathematics has no predictive qualities, and no theoretical or mechanical foundation. Before we define electronegativity, let us define what an electron is. We will first explain the basic concept before giving practical examples to prove it.
Since the effect of force on a body sometimes appears as action at a distance and since all action at a distance can only be explained by the introduction of a field, we will first consider fields to explain these. If there is only one body in a field, it reaches an equilibrium position with respect to that field. Hence, the body does not feel any force. Only when another body enters the field, the interaction with it affects the field, which is felt by both bodies. Hence any interaction, to be felt, must contain at least two bodies separated by a field. Thus, all interactions are three-fold structures (one referral or relatively central structure, the other peripheral; both separated by the field - we call it tribrit). All bodies that take part in interactions are also three-fold structures, as otherwise there would not be a net charge for interaction with other bodies or the field. Only in this way we can explain the effect of one body on the other in a field. It may be noted that particles with electric charge create electric fields that flow from higher concentration to lower concentration. When the charged bodies are in motion, they generate a magnetic field that closes in on itself. This motion is akin to that of a boat flowing from high altitude to low altitude with river current and creating a bow-shock effect in a direction perpendicular to the direction of motion of the boat that closes in due to interaction with static water.
All particles or bodies are discrete structures that are confined within their dimension, which differentiates their "inner space" from their "outer space". The "background structure" or the "ground" on which they are positioned is the field. The boundaries between particles and fields are demarcated by compact density variations. But what happens when there is uniform density between the particles and the field - where the particle melts into the field? The state is singular, indistinguishable in localities, uncommon and unusual from our experience, making it indescribable - thus, unknowable. We call this state of uniform density (sama rasa) singularity (pralaya - literally meaning approaching ultimate dissolution). We do not accept that singularity is a point or region in spacetime in which gravitational forces cause matter to have an infinite density - where the gravitational tides diverge - because gravitational tides have never been observed. We do not accept that singularity is a condition when equations do not give a valid value, which can sometimes be avoided by using a different coordinate system, because we have shown that division by zero leaves the number unchanged and renormalization is illegitimate mathematics. Yet, in that state there can be no numbers, hence no equations. We do not accept that events beyond the singularity will be "stranger than science fiction", because at singularity there cannot be any "events".
Some physicists have modeled a state of quantum gravity beyond singularity and call it the "big bounce". Though we do not accept their derivation and their "mathematics", we agree in general with the description of the big bounce. They have interpreted it as evidence for colliding galaxies. We refer to that state as the true "collapse" and its aftermath. The law of conservation demands that for every displacement caused by a force, there must be generated an equal and opposite displacement. Since application of force leads to inertia, for every inertia of motion there must be an equivalent inertia of restoration. Applying this principle to the second law of thermodynamics, we reach a state where the structure formation caused by differential density dissolves into a state of uniform density - not degenerates to a state of maximum entropy. We call that state singularity. Since at that stage there is no differentiation between the state of one point and any other point, there cannot be any perception, observer or observable. There cannot be any action, number or time. Even the concept of space comes to an end, as there are no discernible objects that can be identified and their interval described. Since this distribution leaves the largest remaining uncertainty (consistent with the constraints for observation), this is the true state of maximum entropy. It is not a state of "heat death" or "infinite chaos", because it is a state mediated by negative energy.
Viewed in this light, we divide objects into two categories: macro objects that are directly perceptible (bhaava pratyaya) and quantum or micro objects that are indirectly perceptible through some mechanism (upaaya pratyaya). The second category is further divided into two: those that have differential density, which makes them perceptible indirectly through their effects (devaah), and those that form a part of the primordial uniform density (prakriti layaah), making them indiscernible. These are like the positive and negative energy states respectively, though not exactly as described by quantum physics. This process is also akin to the creation and annihilation of virtual particles, though it involves real particles only. We describe the first two states of the objects and their intermediate state as "dhruva, dharuna and dhartra" respectively.
When the universe reaches a state of singularity as described above, it is dominated by the inertia of restoration. The singular state (sama rasa) implies that there is equilibrium everywhere. This equilibrium can be thought of in two ways: universal equilibrium and local equilibrium. The latter implies that every point is in equilibrium. Both the inertia of motion and inertia of restoration cannot absolutely cancel each other. Because, in that event the present state could not have been reached as no action ever would have started. Thus, it is reasonable to believe that there is a mismatch (kimchit shesha) between the two, which causes the inherent instability (sishrhkshaa) at some point. Inertia of motion can be thought of as negative inertia of restoration and vice versa. When the singularity approaches, this inherent instability causes the negative inertia of restoration to break the equilibrium. This generates inertia of motion in the uniformly dense medium that breaks the equilibrium over a large area. This is the single and primary force that gives rise to other secondary and tertiary etc, forces.
This interaction leads to a chain reaction of breaking the equilibrium at every point over a large segment, resembling spontaneous symmetry breaking and density fluctuations followed by the bow-shock effect. Thus, the inertia of motion diminishes and ultimately ceases at some point in a spherical structure. We call the circumference of this sphere "naimisha" - literally meaning controller of the circumference. Since this action measures off a certain volume from the infinite expanse of uniform density, the force that causes it is called "maayaa", which literally means "that by which (everything is) scaled". Before this force operated, the state inside the volume was the same as the state outside it. But once this force operates, the density distributions inside and outside become totally different. While the outside continues in the state of singularity, the inside is chaotic. While at one level inertia of motion pushes ahead towards the boundary, it is countered by the inertia of restoration, causing non-linear interaction leading to density fluctuation. We call the inside stuff that cannot be physically described "rayi", and the force associated with it "praana" - which literally means the source of all displacements. All other forces are variants of this force. As can be seen, "praana" has two components, revealed as inertia of motion and inertia of restoration, the latter similar in magnitude to inertia of motion but in the reverse direction from the center of mass. We call this second force "apaana". The displacements caused by these forces are unidirectional. Hence in isolation they are not able to form structures. Structure formation begins when both operate on "rayi" at a single point. This creates an equilibrium point (we call it vyaana) around which the surrounding "rayi" accumulates. We call this mechanism "bhuti", implying accumulation in great numbers.
When "bhuti" operates on "rayi", it causes density variation at different points, leading to structure formation through layered structures that lead to confinement. Confinement increases temperature. This creates pressure on the boundary, leading to the operation of inertia of restoration that tries to confine the expansion. Thus, these are not always stable structures. Stability can be achieved only through equilibrium. But this is a different type of equilibrium. When inertia of restoration dominates over a relatively small area, it gives a stable structure. This is one type of confinement, which leads to the generation of the strong, weak and electromagnetic interactions and radioactivity. Together we call these "Yagnya", which literally means coupling (samgati karane). Over large areas, the distribution of such stable structures can also bring in an equilibrium equal to the primordial uniform density. This causes bodies to remain attached to each other from a distance through the field. We call this force "sootra", which literally means string. This causes the gravitational interaction. Hence it is related to mass and inversely to distance. In gravitational interaction, one body does not hold the other; the two bodies revolve around their barycenter.
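The barycenter remark can be checked with standard approximate values (a side calculation, not part of this framework): for the Sun and Jupiter the barycenter lies just outside the solar surface, so both bodies indeed revolve around a shared point rather than one around the other.

    m_sun = 1.989e30  # kg
    m_jup = 1.898e27  # kg
    d = 7.785e11      # mean Sun-Jupiter separation, m
    r_sun = 6.957e8   # solar radius, m

    r_bary = d * m_jup / (m_sun + m_jup)  # barycenter distance from Sun's center
    print(f"barycenter at {r_bary:.3e} m = {r_bary / r_sun:.2f} solar radii")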
When "Yagnya" operates at negative potential, i.e., "apaana" dominates over "rayi", it causes what is known as the strong nuclear interaction, which is confined within the positively charged nucleus. Outside the confinement there is a deficiency of negative charge, which is revealed as positive charge. We call this force "jaayaa", literally meaning that which creates all particles. This force acts in 13 different ways to create all elementary particles (we are not discussing it now). But when "Yagnya" operates at positive potential, i.e., "praana" dominates over "rayi", it causes what is known as the weak nuclear interaction. Outside the confinement there is a deficiency of positive charge, which is revealed as negative charge. This negative charge component searches for complementary charge to attain equilibrium. This was reflected in the Gargamelle bubble chamber, which photographed the tracks of a few electrons suddenly starting to move. This has been described as the W boson. We call this mechanism "dhaaraa" - literally meaning sequential flow, since it starts a sequence of actions with corresponding reactions (the so-called W⁺, W⁻ and Z bosons).
Till this time, there is no structure: it is only density fluctuation. When the above reactions try to shift the relatively denser medium, the inertia of restoration is generated and tries to balance between the two opposite reactions. This appears as charge (lingam), because in its interaction with others, it either tries to push them away (positive charge – pum linga) or confine them (negative charge – stree linga). Since this belongs to a different type of reaction, the force associated with it is called “aapah”. When the three forces of “jaayaa”, “dhaaraa” and “aapah” act together, it leads to electromagnetic interaction (ap). Thus, electromagnetic interaction is not a separate force, but only accumulation of the other forces. Generally, an electric field is so modeled that it is directed away from a positive electric charge and towards a negative electric charge that generated the field. Another negative electric charge inside the generated electric field would experience an electric force in the opposite direction of the electric field, regardless of whether the field is generated by a positive or negative charge. A positive electric charge in the generated electric field will experience an electric force in the same direction as the electric field. This shows that the inherent characteristic of a positive charge is to push away from the center to the periphery. We call this characteristic “prasava”. The inherent characteristic of a negative charge is to confine positive charge. We call this characteristic “samstyaana”.
Since electric current behaves in a bipolar way, i.e., stretching out, whereas magnetic flow always closes in, there must be two different sources of their origin and they must have been coupled with some other force. This is the physical explanation of electro-magnetic forces. Depending upon temperature gradient, we classify the electrical component into four categories (sitaa, peeta, kapilaa, ati-lohitaa) and the magnetic forces into four corresponding categories (bhraamaka, swedaka, draavaka, chumbaka).
While explaining uncertainty, we had shown that if we want to get any information about a body, we must either send some perturbation towards it to rebound, or measure the incoming radiation emitted by it through the intervening field, where it gets modified. We had also shown that for every force applied (energy released), there is an equivalent force released in the opposite direction (a corrected version of Mr. Newton's third law). Let us take a macro example first. Planets move more or less in the same plane around the Sun, like boats floating on the same plane in a river (which can be treated as a field). The river water is not static. It flows in a specific rhythm, like the space weather. When a boat passes, there is a bow-shock effect in the water in front of the boat, and the rhythm is temporarily changed till reconnection of the resultant wave. The water is displaced in a direction perpendicular to the motion of the boat. However, the displaced water is pushed back by the water surrounding it due to inertia of restoration. Thus, it moves backwards relative to the boat, charting a curve. Maximum displacement of the curve is at the middle of the boat.
We can describe this as if the boat is pushing the water away, while the water is trying to confine the boat. The interaction will depend on the mass and volume (that determines relative density) and the speed of the boat on the one hand and the density and velocity of the river flow on the other. These two can be described as the potentials for interaction (we call it saamarthya) of the boat and the river respectively. The potential that starts the interaction first by pushing the other is called the positive potential and the other that responds to this is called the negative potential. Together they are called charge (we call it lingam). When the potential leads to push the field, it is the positive charge. The potential that confines the positive charge is negative charge. In an atom, this negative potential is called an electron. The basic cause for such potential is instability of equilibrium due to the internal effect of a confined body. Their position depends upon the magnitude of the instability, which explains the electron affinity also. The consequent reaction is electronegativity.
The Solar system is inside a big bubble, which forms a part of its heliosphere. The planets are within this bubble. The planets are individually tied to the Sun through gravitational interaction. They also interact with each other. In the boat example, the river flows within two boundaries and the riverbed affects its flow. The boat acts with a positive potential. The river acts with a negative potential. In the Solar system, the Sun acts with a positive potential. The heliosphere acts with a negative potential. In an atom, the protons act with a positive potential. The electrons act with a negative potential.
While discussing Coulomb's law we have shown that interaction between two positive charges leads to explosive results. Thus, part of the energy of the protons explodes like solar flares and tries to move out in different directions, which are moderated by the neutrons in the nucleus and the electron orbits at the boundary. The point where the exploding radiation stops at the boundary makes an impact on the boundary and becomes perceptible. This is called the electron. Since the exploding radiation returns from there towards the nucleus, it is said to have a negative potential. The number of protons determines the number of explosions - hence the number of boundary electrons. Each explosion in one direction is matched by an equivalent disturbance in the opposite direction. This determines the number of electrons in the orbital. The neutrons are like planets in the solar system. This is confined by the negative potential of the giant bubble of the Solar system, which is the equivalent of the electron orbits in atoms. Since the flares appear in random directions, the position of the electron cannot be precisely determined. In the boat example, the riverbed acts like the neutrons. The extra-nuclear field of the atom is like the giant bubble. The water near the boat that is most disturbed acts similarly. The totality of electron orbits is like the heliosphere. The river boundaries act similarly.
The electrons have no fixed position until one looks at them and the wave function collapses (energy is released). However, if one plots the various positions of the electron after a large number of measurements, eventually one gets a ghostly orbit-like pattern. This supports the above view.
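To illustrate how repeated measurements build up such a pattern, here is a minimal numerical sketch. It assumes the standard quantum-mechanical radial density for a ground-state hydrogen electron, P(r) ∝ r²e^(−2r/a₀); that textbook distribution is an assumption brought in for illustration, not a result of the model described above.

```python
import numpy as np

# Minimal sketch: simulate many independent position measurements of a
# ground-state hydrogen electron and histogram the measured radii.
# Assumes the textbook 1s radial density P(r) dr ~ r^2 exp(-2r/a0) dr,
# which is a Gamma(k=3, theta=a0/2) distribution in r.
a0 = 5.29177e-11          # Bohr radius in metres
rng = np.random.default_rng(0)
r = rng.gamma(shape=3.0, scale=a0 / 2.0, size=100_000)

counts, edges = np.histogram(r, bins=50, range=(0, 6 * a0))
peak = edges[np.argmax(counts)]
print(f"most probable radius ~ {peak:.2e} m (expected ~ a0 = {a0:.2e} m)")
```

No single sample says anything about an orbit; only the accumulated histogram shows the shell-like concentration of measured positions.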
The atomic radius is a term used to describe the size of the atom, but there is no standard definition for this value. Atomic radius may refer to the ionic radius, covalent radius, metallic radius, or van der Waals radius. In all cases, the size of the atom is dependent on how far out the electrons extend. Thus, electrons can be described as the outer boundary of the atom that confines the atom. It is like the “heliopause” of the solar system, which confines the solar system and differentiates it from the inter-stellar space. There are well defined planetary orbits, which lack a physical description except against the backdrop of the solar system. These are like the electron shells. This similarity is only partial, as each atomic orbital admits up to two otherwise identical electrons with opposite spin, whereas planets have no such companion (though the libration points 1 and 2 or 4 and 5 can be thought of for comparison). The reason for this difference is the differing balance of mass (volume and density) dominating the two systems.
Charge neutral gravitational force that arises from the center of mass (we call it Hridayam) stabilizes the inner (Sun-ward or nucleus-ward) space between the Sun and the planet, and between the nucleus and the electron shells. The charged electric and magnetic fields dominate the field (from the center to the boundary) and confine and stabilize the inter-planetary field or the extra-nuclear field (we call it “Sootraatmaa”, which literally means “self-sustained entangled strings”). While in the case of the Sun-planet system most of the mass is concentrated at the center as one body, in the case of the nucleus, protons and neutrons with comparable masses interact with each other, destabilizing the system continuously. This affects the electron arrangement. The mechanism (we call it “Bhuti”), the cause and the macro manifestation of these forces and spin will be discussed separately.
We have discussed the electroweak theory earlier. Here it would suffice to say that electrons are nothing but outer boundaries of the extra nuclear space and like the planetary orbits, have no physical existence. We may locate the planet, but not its orbit. If we mark one segment of the notional orbit and keep a watch, the planet will appear there periodically, but not always. However, there is a difference between the two examples as planets are like neutrons. It is well known that the solar wind originates from the Sun and travels in all directions at great velocities towards the interstellar space. As it travels, it slows down after interaction with the inter-planetary medium. The planets are positioned at specific orbits balanced by the solar wind, the average density gradient of various points within the Solar system and the average velocity of the planet besides another force that will be discussed while analyzing Coulomb’s law.
We cannot measure both the position and momentum of the electron simultaneously. Each electron shell is tied to the nucleus individually, like planets around the Sun. This is proved by the Lamb shift and the overlapping of different energy levels. The shells are entangled with the nucleus just as the planets are not only gravitationally entangled with the Sun, but also with each other. We call this mechanism “chhanda”, which literally means entanglement.
Quantum theory now has 12 gauge bosons, only three of which are known to exist, and only one of which has been well-linked to the electroweak theory. The eight gluons are completely theoretical, and only fill slots in the gauge theory. But we have a different explanation for these. We call these eight “Vasu”, which literally means “that which constitutes everything”. Interaction requires at least two different units; each of these could interact with the other seven. Thus, we have seven types of “chhandas”. Of these, only three (maa, pramaa, pratimaa) are involved in fixed dimension (dhruva), fluid dimension (dhartra) and dimension-less particles (dharuna). The primary difference between these bodies relates to density (apaam pushpam), which affects and is affected by volume. A fourth “chhanda” (asreevaya) is related to the confining fields (aapaam). We will discuss these separately.
We can now review the results of the double slit experiment and the diffraction experiment in the light of the above discussion. Let us take a macro example first. Planets move more or less in the same plane around the Sun, just as boats float on the same plane in a river (which can be treated as a field). The river water is not static. It flows in a specific rhythm, like the space weather. After a boat passes, there is a bow shock effect in the water and the rhythm is temporarily changed till reconnection. The planetary orbits behave in a similar way. The solar wind also behaves with the magnetospheres of planets in a similar way. If we take two narrow angles and keep a watch for planets moving past those angles, we will find a particular pattern of planetary movement. If we could measure the changes in the field of the Solar system at those points, we would also note a fixed pattern. It is like boats crossing a bridge with two channels underneath. We may watch the boats passing through a specific channel and the wrinkled surface of the water. As the boats approach the channels, a compressed wave precedes each boat. This wave will travel through both channels. However, if the boats are directed towards one particular channel, then the wave will proceed mostly through that channel. The effect on the other channel will be almost nil, showing fixed bands on the surface of the water. If the boats are allowed to move unobserved, they will float through either of the channels and each channel will have a 50% chance of a boat passing through it. Thus, the corresponding waves will show an interference pattern.
Something similar happens in the case of electrons and photons. The so-called photon has zero rest mass. Thus, it cannot displace any massive particles, but flows through the particles imparting only its energy to them. The space between the emitter and the slits is not empty. Thus, the movement of the mass-less photon generates a reaction similar to that of the boats in the channels. Since the light pulse spreads out spherically in all directions, it behaves like a water sprinkler. This creates the wave pattern as explained below:
Let us consider a water sprinkler in the garden gushing out water. Though the water is primarily forced out by one force, other secondary forces come into play immediately. One is the inertia of motion of the particles pushed out. The second is the interaction between particles that are in different states of motion due to such interactions with other particles. What we see is the totality of such interactions, with components of the stream gushing out at different velocities in the same general direction (not in identical directions, but in a narrow band). If the gushing stream falls on a stationary globe that stops its energy, the globe will rotate. It is because the force is not enough to displace the globe from its position completely; it only partially displaces the surface, which rotates the globe on its fixed axis.
Something similar happens when the energy flows, generating a bunch of radiations of different wavelengths. If it cannot displace the particle completely, the particle rotates at its position, so that the energy “slips out” past it, moving tangentially. Alternatively, the energy moves one particle, which hits the next particle. Since energy always moves objects tangentially, when the energy flows by the particle, the particle is temporarily displaced. It regains its position due to inertia of restoration – the elasticity of the medium – when other particles push it back. Thus, only the momentum is transferred to the next particle, giving the energy flow a wave shape.
The diffraction experiment can be compared to the boats being divided to pass in equal numbers through both channels. The result would be the same. It will show an interference pattern. Since the electron that confines positive charge behaves like the photon, it should be mass-less.
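For reference, the textbook two-slit intensity pattern that any reinterpretation must reproduce can be sketched numerically; the wavelength, slit separation and slit width below are arbitrary illustrative values.

```python
import math

# Minimal sketch of the standard two-slit interference pattern:
# I(theta) ~ cos^2(pi * d * sin(theta) / lambda), modulated by the
# single-slit diffraction envelope of a slit of width a.
lam = 500e-9   # wavelength, m
d = 50e-6      # slit separation, m
a = 10e-6      # slit width, m

def intensity(theta: float) -> float:
    beta = math.pi * a * math.sin(theta) / lam
    envelope = 1.0 if beta == 0 else (math.sin(beta) / beta) ** 2
    return math.cos(math.pi * d * math.sin(theta) / lam) ** 2 * envelope

# bright fringes occur where d * sin(theta) = m * lambda:
for m in range(3):
    theta = math.asin(m * lam / d)
    print(f"m = {m}: theta = {theta*1e3:.2f} mrad, I = {intensity(theta):.3f}")
```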
It may be noted that the motion of the wave is always within a narrow band and is directed towards the central line, which is the equilibrium position. This implies that there is a force propelling it towards the central line. We call this force inertia of restoration (sthitisthaapaka samskaara), which is akin to elasticity. The bow-shock effect is a result of this inertia. But after reaching the central line, the wave over-shoots due to inertia of motion. The reason is that systems are probabilistically almost always close to equilibrium, but transient fluctuations to non-equilibrium states can be expected due to inequitable energy distribution in the system and its environment, independently and collectively. Once in a non-equilibrium state, it is highly likely that both after and before that state the system was closer to equilibrium. All such fluctuations are confined within a boundary. The electron provides this boundary. The exact position of the particle cannot be predicted as it is perpetually in motion, but it is somewhere within that boundary only. This is the probability distribution of the particle. It may be noted that the particle is at one point within this band at any given time and not smeared out over all points. However, because of its mobility, it has the possibility of covering the entire space at some time or other. Since the position of the particle cannot be determined in one reading, a large number of readings are taken. This is bound to give a composite result. But this does not imply that such readings represent the position of the particle at any specific moment or at all times before measurement.
The “boundary conditions” can be satisfied by many different waves (called harmonics – we call it chhanda) if each of those waves has a position of zero displacement at the right place. These positions where the value of the wave is zero are called nodes. (Sometimes two types of waves - traveling waves and standing waves - are distinguished by whether the nodes of the wave move or not.) If electrons behave like waves, then the wavelength of the electron must “fit” into any orbit that it makes around the nucleus in an atom. This is the “boundary condition” for a one electron atom. Orbits that do not have the electron’s wavelength “fit” are not possible, because wave interference will rapidly destroy the wave amplitude and the electron would not exist anymore. This “interference” effect leads to discrete (quantized) energy levels for the atom. Since light interacts with the atom by causing a transition between these levels, the color (spectrum) of the atom is observed to be a series of sharp lines. This is precisely the pattern of energy levels that are observed to exist in the Hydrogen atom. Transitions between these levels give the pattern in the absorption or emission spectrum of the atom.
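A minimal sketch of the sharp-line spectrum this “fit” condition produces, using the conventional Rydberg formula for hydrogen; the formula and constant are the standard textbook ones, assumed here for illustration.

```python
# Minimal sketch of the discrete hydrogen spectrum described above:
# quantized levels E_n ~ -13.6 eV / n^2 give sharp spectral lines whose
# wavelengths follow the Rydberg formula.
R_H = 1.0967758e7  # Rydberg constant for hydrogen, m^-1

def line_wavelength_nm(n_upper: int, n_lower: int) -> float:
    """Wavelength of the photon emitted in an n_upper -> n_lower transition."""
    inv_lambda = R_H * (1.0 / n_lower**2 - 1.0 / n_upper**2)
    return 1e9 / inv_lambda

# Balmer series (transitions down to n = 2) lies in the visible range:
for n in range(3, 7):
    print(f"{n} -> 2 : {line_wavelength_nm(n, 2):.1f} nm")
```

The printed values reproduce the familiar visible Balmer lines near 656, 486, 434 and 410 nm.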
In view of the above discussion, the Lorentz force law becomes simple. Since division by zero leaves the quantity unchanged, the equation remains valid and does not become infinite for point particles. The equation shows the mass-energy requirement for a system to achieve the desired charge density. But what about the radius “a” for the point electron and the 2/3 factor in the equation?
The simplest explanation for this is that no one has measured the mass or radius of the electron, though its charge has been measured. This has been divided by c² to get the hypothetical mass. As explained above, this mass is not the mass of the electron, but the mass required to achieve a charge density equal to that of an electron shell, which is different from that of the nucleus and the extra-nuclear field, like the heliosheath that is the dividing line between the heliosphere and the inter-stellar space. Just as solar radiation rebounds from the termination shock, emissions from the proton rebound from the electron shell, which is akin to the stagnation region of the solar system.
The Voyager 1 spacecraft is now in a stagnation region in the outermost layer of the bubble around our solar system, beyond the termination shock. Data obtained from Voyager over the last year reveal this region to be a kind of cosmic purgatory. In it, the wind of charged particles streaming out from our sun has calmed, our solar system's magnetic field is piled up, and higher-energy particles from inside our solar system appear to be leaking out into interstellar space. Scientists previously reported that the outward speed of the solar wind had diminished to zero, marking a thick, previously unpredicted “transition zone” at the edge of our solar system. During this past year, Voyager's magnetometer also detected a doubling in the intensity of the magnetic field in the stagnation region. Like cars piling up at a clogged freeway off-ramp, the increased intensity of the magnetic field shows that inward pressure from interstellar space is compacting it. At the same time, Voyager has detected a 100-fold increase in the intensity of high-energy electrons from elsewhere in the galaxy diffusing into our solar system from outside, which is another indication of the approaching boundary.
This is exactly what is happening at the atomic level. The electron is like the termination shock at the heliosheath that encloses the “giant bubble” around the Solar system, which is the equivalent of the extra-nuclear space. The electron shells are like the stagnation region that stretches between the giant bubble and the inter-stellar space. Thus, the radius a in the Lorentz force law is that of the associated nucleus and not that of the electron. The back reaction is the confining magnetic pressure of the electron on the extra-nuclear field. The factor 2/3 is related to the extra-nuclear field, which contributes to the Hamiltonian HI. The balance 1/3 is related to the nucleus, which contributes to the Hamiltonian HA. We call this concept “Tricha saama”, which literally means “tripled radiation field”. We have theoretically derived the value of π from this principle. The effect of the electron that is felt outside – like the bow shock effect of the Solar system – is the radiation effect, which contributes to the Hamiltonian HR. To understand the physical implication of this concept, let us consider the nature of perception.
Before we discuss perception of bare charge and bare mass, let us discuss the modern notion of albedo. Albedo is commonly used to describe the overall average reflection coefficient of an object. It is the fraction of solar energy (shortwave radiation) reflected from the Earth or other objects back into space. It is a measure of the reflectivity of the earth's surface: a non-dimensional, unit-less quantity that indicates how well a surface reflects solar energy. Albedo (α) varies between 0 and 1. A value of 0 means the surface is a “perfect absorber” that absorbs all incoming energy. A value of 1 means the surface is a “perfect reflector” that reflects all incoming energy. Albedo generally applies to visible light, although it may involve some of the infrared region of the electromagnetic spectrum.
Neutron albedo is the probability under specified conditions that a neutron entering into a region through a surface will return through that surface. Day-to-day variations of cosmic-ray-produced neutron fluxes near the earth's ground surface are measured by using three sets of paraffin-moderated BF3 counters, installed at different locations: 3 m above ground, at ground level, and 20 cm underground. Neutron flux decreases observed by these counters when snow cover exists show that there are upward-moving neutrons, that is, ground albedo neutrons near the ground surface. The amount of albedo neutrons is estimated to be about 40 percent of the total neutron flux in the energy range 1 to 10⁶ eV.
Albedos are of two types: “bond albedo” (measuring total proportion of electromagnetic energy reflected) and “geometric albedo” (measuring brightness when illumination comes from directly behind the observer). The geometric albedo is defined as the amount of radiation relative to that from a flat Lambert surface which is an ideal reflector at all wavelengths. It scatters light isotropically - in other words, an equal intensity of light is scattered in all directions; it doesn’t matter whether you measure it from directly above the surface or off to the side. The photometer will give you the same reading. The bond albedo is the total radiation reflected from an object compared to the total incident radiation from the Sun. The study of albedos, their dependence on wavelength, lighting angle (“phase angle”), and variation in time comprises a major part of the astronomical field of photometry.
The albedo of an object determines its visual brightness when viewed with reflected light. A typical geometric ocean albedo is approximately 0.06, while bare sea ice varies from approximately 0.5 to 0.7. Snow has an even higher albedo at 0.9. It is about 0.04 for charcoal. There cannot be any geometric albedo for gaseous bodies. The albedos of planets are tabulated below:
[Table: geometric and bond albedos of the planets. The table itself has not survived; the only legible entry is a bond albedo of 0.343 ± 0.032.]
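Since the following paragraphs question the usual link between albedo and temperature, the conventional link is worth stating explicitly. A minimal sketch, assuming the standard radiative-balance relation for a fast-rotating body, T_eq = [L(1 − A)/(16πσd²)]^(1/4); the bond-albedo figures used are approximate published values.

```python
import math

# Minimal sketch: the conventional equilibrium temperature implied by a
# body's bond albedo A at distance d from the Sun (fast-rotating grey body):
#   T_eq = [ L * (1 - A) / (16 * pi * sigma * d^2) ]^(1/4)
L_SUN = 3.828e26      # solar luminosity, W
SIGMA = 5.670374e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
AU = 1.495979e11      # astronomical unit, m

def t_eq(bond_albedo: float, d_au: float) -> float:
    d = d_au * AU
    return (L_SUN * (1.0 - bond_albedo) / (16.0 * math.pi * SIGMA * d * d)) ** 0.25

print(f"Earth (A ~ 0.306): {t_eq(0.306, 1.0):.0f} K")   # ~255 K
print(f"Moon  (A ~ 0.11):  {t_eq(0.11, 1.0):.0f} K")    # ~270 K
```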
The above table shows some surprises. Generally, change in the albedo is related to temperature difference. In that case, it should not be almost equal for both Mercury, which is a hot planet being nearer to the Sun, and the Moon, which is a cold satellite much farther from the Sun. In the case of the Moon, it is believed that the low albedo is caused by the very porous first few millimeters of the lunar regolith. Sunlight can penetrate the surface and illuminate subsurface grains, the scattered light from which can make its way back out in any direction. At full phase, all such grains cover their own shadows; the dark shadows being covered by bright grains, the surface is brighter than normal. (The perfectly full moon is never visible from Earth; at such times, the moon is eclipsed. From the Apollo missions, we know that the exact sub-solar point – in effect, the fullest possible moon – is some 30% brighter than the fullest moon seen from earth. It is thought that this is caused by glass beads formed by impact in the lunar regolith, which tend to reflect light in the direction from which it comes. This light is therefore reflected back toward the sun, bypassing earth.)
The above discussion shows that the present understanding of albedo may not be correct. Ice and snow, which are very cold, show much higher albedo than ocean water. But both Mercury and the Moon show almost the same albedo even though their temperatures differ widely. Similarly, if porosity is a criterion, ice occupies more volume than water and is hence more porous. Then why should ice show a higher albedo than water? Why should the Moon's albedo be equal to that of Mercury, whose surface appears metallic, whereas the Moon's surface soil is brittle? The reason is that if we heat up lunar soil, it will look metallic like Mercury. In other words, geologically, both the Moon and Mercury belong to the same class, as if they share the same DNA. For this reason, we generally refer to Mercury as the off-spring of the Moon. The concept of albedo does not take into account the bodies that emit radiation.
We can see objects using solar or lunar radiation. But till it interacts with a body, we cannot see the incoming radiation. We see only the reflective radiation – the radiation that is reflected after interacting with the field set up by our eyes. Yet, we can see both the Sun and the Moon that emit these radiations. Based on this characteristic, the objects are divided into four categories:
• Radiation that shows self-luminous bodies like stars as well as other similar bodies (we call it swa-jyoti). The radiation itself has no colors – not perceptible to the eye. Thus, outer space is only black or white.
• Reflected colorless radiation like that of Moon that shows not only the emission from reflecting bodies (not the bodies themselves), but also other bodies (para jyoti), and
• Reflecting bodies that show a sharp change in reflectivity as a function of wavelength (which would occur if a planet had vegetation similar to that on Earth) and that show themselves in different colors (roopa jyoti). Light that has reflected from a planet like Earth is polarized, whereas light from a star is normally unpolarized.
• Non-reflecting bodies that do not radiate (ajyoti). These are dark bodies.
Of these, the last category has 99 varieties including black holes and neutron stars.
Before we discuss dark matter and dark energy, let us discuss some more aspects of the nature of radiation. X-ray emissions are treated as a signature of black holes. Similarly, gamma ray bursts are keenly watched by astronomers. Gamma rays and x-rays are clubbed together at the short-wavelength end of the electromagnetic spectrum. However, in spite of some similarities, the origin of the two shows a significant difference. While x-rays originate from the electron shell region, gamma rays originate from the region deep inside the nucleus. We call such emissions “pravargya”.
There is much misinformation, speculation and sensationalization relating to Black holes like the statement: “looking ahead inside a Black hole, you will see the back of your head”. Central to the present concept of Black holes is the singularities that arise as a mathematical outcome of General Relativity. The modern concept of singularity does not create a “hole”. It causes all mass to collapse to a single “point”, which in effect closes any “holes” that may exist. A hole has volume and by definition, the modern version of singularity has no volume. Thus, it is the opposite concept of a hole. We have shown that the basic postulates of GR including the equivalence principle are erroneous. We have also shown that division by zero leaves a number unchanged. The zero-dimensional point cannot enter any equations defined by cardinal or counting numbers, which have extensions – hence represent dimensions. Since all “higher mathematics” is founded on differential equations, there is a need to re-look at the basic concepts of Black holes.
Mr. Einstein had tried to express GR in terms of the motion of “mass points” in four dimensional space. But “mass points” is an oxymoron. Mass always has dimension (the terms like super-massive black hole prove this). A point, by definition has no dimension. Points cannot exist in equations because equations show changes in the output when any of the parameters in the input is changed. But there cannot be any change in the point except its position with reference to an origin, which depicts length only. What GR requires is a sort of renormalization, because the concept has been de-normalized first. One must consider the “field strength”. But the lack of complete field strength is caused by trying to do after-the-fact forced fudging of equations to contain entities such as points that they cannot logically contain. The other misguiding factor is the concept of “messenger particles” that was introduced to explain the “attractive force”.
The mathematics of General Relativity should be based on a constant differential that is not zero and seek the motion of some given mass or volume. This mass or volume may be as small as we like, but it cannot be zero. This causes several fundamental and far-reaching changes to the mathematics of GR, but the first of these changes is of course the elimination of singularity from all solutions. Therefore the central “fact” of the black hole must be given up. Whatever may be at the center of a black hole, it cannot be a “singularity”.
Mr. Chandrasekhar used Mr. Einstein's field equations to calculate densities and accelerations inside a collapsing superstar. His mathematics suggested the singularity at the center, as well as other characteristics that are still accepted as defining the black hole. Mr. Einstein himself contradicted Mr. Chandrasekhar's conclusions. Apart from using mass points in GR, Mr. Einstein made several other basic errors that even Mr. Chandrasekhar did not correct and that are still being continued. One such error is the use of the term γ, which, as has been explained earlier, really does not change anything except the perception of the object by different observers, unrelated to the time evolution of the object proper. Hence it cannot be treated as actually affecting the time-evolution of the object. Yet, in GR, it affects both the “x” and “t” transformations. In some experimental situations γ is nearly correct. But in a majority of situations, γ fails, sometimes very badly. Also, γ is the main term in the mass increase equation. To calculate volumes or densities in a field, one must calculate both radius (length) and mass; and γ comes into play in both.
Yet, Mr. Einstein had wrongly assigned several length and time variables in SR, giving them to the wrong coordinate systems or to no specific coordinate system. He skipped an entire coordinate system, achieving two degrees of relativity when he thought he had achieved only one. Because his x and t transforms were compromised, his velocity transform was also compromised. He carried this error into the mass transforms, which infected them as well. This problem then infected the tensor calculus and GR. This explains the various anomalies, variations and so-called violations within Relativity. Since Mr. Einstein's field equations are not correct, Mr. Schwarzschild's solution of 1916 is not correct. Mr. Israel's non-rotating solution is not correct. Mr. Kerr's rotating solution is not correct. And the solutions of Messrs Penrose, Wheeler, Hawking, Carter, and Robinson are not correct.
Let us take just one example. The black hole equations are directly derived from GR – a theory that stipulates that nothing can equal or exceed the speed of light. Yet the escape velocity at the black hole must equal or exceed the speed of light in order for light to be trapped. In that case, all matter falling into a black hole would instantaneously achieve infinite mass. It is not clear how bits of infinite mass can be collected into a finite volume, increase in density and then disappear into a singularity. In other words, the assumptions and the mathematics that led to the theory of the black hole do not work inside the created field. The exotic concepts like wormholes, tachyons, virtual particle pairs, quantum leaps and non-linear i-trajectories in 11-dimensional boson-massed fields in parallel universes, etc., cannot avoid this central contradiction. It is not the laws of physics that break down inside a black hole. It is the mathematics and the postulates of Relativity that break down.
It is wrongly assumed that matter that enters a black hole escapes from our universe. Mass cannot exist without dimension. Even energy must have differential fluid dimension; otherwise its effect cannot be experienced differently from others. Since the universe is massive, it must have dimensions – inner space as differentiated from outer space. Thus, the universe must be closed. The concept of expanding universe proves it. It must be expanding into something. Dimension cannot be violated without external force. If there is external force, then it will be chaotic and no structure can be formed, as closed structure formation is possible only in a closed universe. From atoms to planets to stars to galaxies, etc., closed structures go on. With limited time and technology, we cannot reach the end of the universe. Yet, logically like the atoms, the planets, the stars, the galaxies, etc, it must be a closed one – hence matter cannot escape from our universe. Similarly, we cannot enter another universe through the black hole or singularity. If anything, it prevents us from doing so, as anything that falls into a black hole remains trapped there. Thus the concept of white holes or pathways to other dimensions, universes, or fields is a myth. There has been no proof in support of these exotic concepts.
When Mr. Hawking, in his A Brief History of Time, says: “There are some solutions of the equations of GR in which it is possible for our astronaut to see a naked singularity: he may be able to avoid hitting the singularity and instead fall through a wormhole and come out in another region of the universe”, he is talking plain nonsense. He admits it in the next sentence, where he says meekly: “these solutions may be unstable”. He never explains how it is possible for any astronaut to see a naked singularity. Without giving any justification, he says that any future Unified Field Theory will use Mr. Feynman's sum-over-histories. But Mr. Feynman's renormalization trick in sum-over-histories is to sum the particle's histories in imaginary time rather than in real time. Hence Mr. Hawking asserts elsewhere that imaginary numbers are important because they include real numbers and more. By implication, imaginary time includes real time and more! These magical mysteries are good selling tactics for fiction, but bad theories.
Black holes behave like a black-body – zero albedo. Now, let us apply the photo-electric effect to black holes – particularly those that are known to exist at the centers of galaxies. There is no dearth of high energy photons all around, and most of them would have frequencies above the threshold limit. Thus, there should be continuous ejection of not only electrons, but also x-rays. Some such radiations have already been noticed by various laboratories and are well documented. The flowing electrons generate a strong magnetic field around them, which appears as the sun-spots on the Sun. Similar effects would be noticed in the galaxies also. The high intensity magnetic fields in neutron stars are well documented. Thus the modern notion of black holes needs modification.
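A minimal sketch of the photoelectric relation invoked here, Einstein's equation K = hν − φ; the work function used is an assumed, illustrative value for a typical metal.

```python
# Minimal sketch of the photoelectric effect: an electron is ejected only
# if the photon energy h*nu exceeds the work function phi, with kinetic
# energy K = h*nu - phi (Einstein's photoelectric equation).
H_PLANCK = 6.626e-34  # J s
EV = 1.602e-19        # J per electron-volt

def ejected_energy_ev(freq_hz: float, phi_ev: float) -> float:
    k = H_PLANCK * freq_hz / EV - phi_ev
    return max(k, 0.0)  # below threshold: no ejection

phi = 4.5  # assumed work function in eV (typical metal, illustrative only)
for f in (5e14, 2e15, 1e18):  # visible, UV and X-ray frequencies
    print(f"nu = {f:.0e} Hz : K = {ejected_energy_ev(f, phi):.1f} eV")
```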
We posit that black holes are not caused by gravity, but due to certain properties of heavier quarks – specifically the charm and the strange quarks. We call these effects “jyoti-gou-aayuh” and the reflected sequence “gou-aayuh-jyoti” for protons and other similar bodies like the Sun and planet Jupiter. For neutrons and other similar bodies like the Earth, we call these “vaak-gou-dyouh” and “gou-dyouh-vaak” respectively. We will deal with it separately. For the present it would suffice that, the concept of waves cease to operate inside a black hole. It is a long tortuous spiral that leads a particle entering a black hole towards its center (we call it vrhtra). It is dominated by cool magnetic fields and can be thought of as real anti-matter. When it interacts with hot electric energy like those of stars and galaxies (we call it Indra vidyut), it gives out electromagnetic radiation that is described as matter and anti-matter annihilating each other.
The black-holes are identified by the characteristic intense x-ray emission activity in their neighborhood, implying the existence of regions of negative electric charge. The notion of black holes linked to singularity is self-contradictory, as a hole implies a volume containing “nothing” in a massive substance, whereas the concept of volume is not applicable to singularity. Any rational analysis of the black hole must show that the collapsing star that creates it simply becomes denser. This is possible only due to the “boundary” of the star moving towards the center, which implies dominance of negative charge. Since negative charge flows “inwards”, i.e., towards the center, it does not emit any radiation beyond its dimension. Thus, there is no interaction between the object and our eyes or other photographic equipment. The radiation that fills the intermediate space is not perceptible by itself. Hence it appears as black. Since space is only black and white, we cannot distinguish it from its surroundings. Hence the name black hole.
Electron shells are a region of negative charge, which always flows inwards, i.e., towards the nucleus. According to our calculation, protons carry a positive charge, which is 1/11 less than an electron. But this residual charge does not appear outside the atom as the excess negative charge flows inwards. Similarly, the black holes, which are surrounded by areas with negative charge, are not visible. Then how are the x-rays emitted? Again we have to go back to the Voyager data to answer this question. The so-called event horizon of the black hole is like the stagnation region in the outermost layer of the bubble around stars like the Sun. Here, the magnetic field is piled up, and higher-energy particles from inside appear to be leaking out into interstellar space. The outward speed of the solar wind diminishes to zero marking a thick “transition zone” at the edge of the heliosheath.
Something similar happens with a black hole. A collapsing star implies increased density with corresponding reduction in volume. The density cannot increase indefinitely, because all confined objects have mass and mass requires volume – however compact. It cannot lead to infinite density and zero volume. There is no need to link these to hypothetical tachyons, virtual particle pairs, quantum leaps and non-linear i-trajectories at 11-dimensional boson-massed fields in parallel universes. On the contrary, the compression of mass gives away the internal energy. The higher energy particles succeed in throwing out radiation from the region of the negative charge in the opposite direction, which appear as x-ray emissions. These negative charges, in turn, accumulate positively charged particles from the cosmic rays (we call this mechanism Emusha varaaha) to create accretion discs that forms stars and galaxies. Thus, we find black holes inside all galaxies and may be inside many massive stars.
On the other hand, gamma ray bursts are generated during supernova explosions. In this case, the positively charged core explodes. According to Coulomb's law, opposite charges attract and like charges repel each other. Hence the question arises: how does the supernova, or for that matter any star or even the nucleus, generate the force to hold the positively charged core together? We will discuss Coulomb's law before answering this question.
Objects are perceived in broadly two ways by the sensory organs. The ocular, auditory and psychological functions related to these organs apparently follow the action-at-a-distance principle (homogenous field interaction). We cannot see something very close to the eye. There must be some separation between the eye and the object, because it needs a field to propagate the waves. The tactile, taste and olfactory functions are always contact functions (discrete interaction). This is proved by the functions of “mirror neurons”. Since the brain acts like the CPU joining all databases, the responses are felt in other related fields in the brain also. When we see an event without actually participating in it, our mental activity shows as if we are actually participating in it. Such behavior of the neurons is well established in medical science and psychology.
In the case of visual perception, the neurons get polarized like the neutral object and create a mirror image impression in the field of our eye (like we prepare a casting), which is transmitted to the specific areas of brain through the neurons, where it creates the opposite impression in the sensory receptacles. This impression is compared with the stored memory of the objects in our brain. If the impression matches, we recognize the object as such or note it for future reference. This is how we see objects and not because light from the object reaches our retina. Only a small fraction of the incoming light from the object reaches our eyes, which can’t give full vision. We don’t see objects in the dark because there is no visible range of radiation to interact with our eyes. Thus, what we see is not the object proper, but the radiation emitted by it, which comes from the area surrounding its confinement - the orbitals. The auditory mechanism functions in a broadly similar way, though the exact mechanism is slightly different.
But when we feel an object through touch, we ignore the radiation because neither our eyes can touch nor our hands can see. Here the mass of our hand comes in contact with the mass of the object, which is confined. The same principle applies for our taste and smell functions. Till the object and not the field set up by it touches our tongue or nose (through convection or diffusion as against radiation for ocular perception), we cannot feel the taste or smell. Mass has the property of accumulation and spread. Thus, it joins with the mass of our skin, tongue or nose to give its perception. This way, what we see is different from what we touch. These two are described differently by the two perceptions. Thus we can’t get accurate inputs to model a digital computer. From the above description, it is clear that we can weigh and measure the dimensions of mass through touch, but cannot actually see it. This is bare mass. Similarly, we can see the effect of radiation, but cannot touch it. In fact, we cannot see the radiation by itself. This is bare charge. These characteristics distinguish bare charge from bare mass.
Astrophysical observations are pointing out to huge amounts of “dark matter” and “dark energy” that are needed to explain the observed large scale structure and cosmic dynamics. The emerging picture is a spatially flat, homogeneous Universe undergoing the presently observed accelerated phase. Despite the good quality of astrophysical surveys, commonly addressed as Precision Cosmology, the nature and the nurture of dark energy and dark matter, which should constitute the bulk of cosmological matter-energy, are still unknown. Furthermore, up till now, no experimental evidence has been found at fundamental level to explain the existence of such mysterious components. Let us examine the necessity for assuming the existence of dark matter and dark energy.
The three Friedmann models of the Universe are described by the following equation:
H² = (8πG/3)ρ − (kc²/R²) + (Λ/3)

where the three terms on the right represent matter density, curvature and dark energy respectively, and:

H = Hubble's constant, ρ = matter density of the universe, c = velocity of light, k = curvature of the Universe, G = gravitational constant, Λ = cosmological constant, R = radius of the Universe.
In this equation, ‘R’ represents the scale factor of the Universe, and H is Hubble's constant, which describes how fast the Universe is expanding. Every factor in this equation is a constant and has to be determined from observations – not derived from fundamental principles. These observables can be broken down into three parts: gravity (which is treated as the same as matter density in relativity), curvature (which is related to but different from topology) and pressure or negative energy given by the cosmological constant that holds back the speeding galaxies. Earlier it was generally assumed that gravity was the only important force in the Universe, and that the cosmological constant was zero. Thus, by measuring the density of matter, the curvature of the Universe (and its future history) was derived as a solution to the above equation. New data have indicated that a negative pressure, called dark energy, exists and the value of the cosmological constant is non-zero. Each of these parameters can close the expansion of the Universe in terms of turn-around and collapse. Instead of treating the various constants as real numbers, scientists prefer the ratio of each parameter to the value that matches the critical value between open and closed Universes. For example, if the density of matter exceeds the critical value, the Universe is assumed to be closed. These ratios are called Omega (subscript M for matter, Λ for the cosmological constant, k for curvature). For reasons related to the physics of the Big Bang, the sum of the various Omegas is treated as equal to one. Thus: ΩM + ΩΛ + Ωk = 1.
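A minimal numerical sketch of this closure relation, together with the critical density ρ_c = 3H²/(8πG) against which the Omega ratios are defined; the H₀ and Omega values are illustrative assumptions only.

```python
import math

# Minimal sketch of the closure relation Omega_M + Omega_Lambda + Omega_k = 1
# and of the critical density rho_c = 3 H^2 / (8 pi G).
G = 6.674e-11                 # m^3 kg^-1 s^-2
H0 = 70.0 * 1000 / 3.0857e22  # assumed 70 km/s/Mpc, converted to s^-1

rho_crit = 3.0 * H0**2 / (8.0 * math.pi * G)
print(f"critical density ~ {rho_crit:.2e} kg/m^3")  # ~9e-27 kg/m^3

omega_m, omega_lambda = 0.3, 0.7        # illustrative values only
omega_k = 1.0 - omega_m - omega_lambda  # curvature term forced by closure
print(f"Omega_k = {omega_k:.2f}  (zero corresponds to a flat universe)")
```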
The three primary methods to measure curvature are luminosity, scale length and number. Luminosity requires an observer to find some standard ‘candle’, such as the brightest quasars, and follow them out to high red-shifts. Scale length requires that some standard size be used, such as the size of the largest galaxies. Lastly, number counts are used, where one counts the number of galaxies in a box as a function of distance. Till date all these methods have been inconclusive, because the brightness, size and number of galaxies change with time in ways that cosmologists have not yet figured out. So far, the measurements are consistent with a flat Universe, which is popular for aesthetic reasons. Thus, the curvature Omega is expected to be zero, allowing the rest to be shared between matter and the cosmological constant.
To measure the value of matter density is a much more difficult exercise. The luminous mass of the Universe is tied up in stars. Stars are what we see when we look at a galaxy and it is fairly easy to estimate the amount of mass tied up in self luminous bodies like stars, planets, satellites and assorted rocks that reflect the light of stars and gas that reveals itself by the light of stars. This contains an estimate of what is called the baryonic mass of the Universe, i.e. all the stuff made of baryons - protons and neutrons. When these numbers are calculated, it is found that Ω for baryonic mass is only 0.02, which shows a very open Universe that is contradicted by the motion of objects in the Universe. This shows that most of the mass of the Universe is not seen, i.e. dark matter, which makes the estimate of ΩM to be much too low. So this dark matter has to be properly accounted for in all estimates: ΩM = Ωbaryons + Ωdark matter
Gravity is measured indirectly by measuring the motion of bodies and then applying Newton's law of gravitation. The orbital period of the Sun around the Galaxy gives a mean mass for the amount of material inside the Sun's orbit. But a detailed plot of the orbital speed of the Galaxy as a function of radius reveals the distribution of mass within the Galaxy. Some scientists describe the simplest type of rotation as wheel rotation. Rotation following Kepler's 3rd law is called planet-like or differential rotation. In this type of rotation, the orbital speed falls off as one goes to greater radii within the Galaxy. To determine the rotation curve of the Galaxy, stars are not used due to interstellar extinction. Instead, 21-cm maps of neutral hydrogen are used. When this is done, one finds that the rotation curve of the Galaxy stays flat out to large distances, instead of falling off. This has been interpreted to mean that the mass of the Galaxy increases with increasing distance from the center.
There is very little visible matter beyond the Sun’s orbital distance from the center of the Galaxy. Hence the rotation curve of the Galaxy indicates a great deal of mass. But there is no light out there indicating massive stars. Hence it is postulated that the halo of our Galaxy is filled with a mysterious dark matter of unknown composition and type.
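The inference from a flat rotation curve to unseen mass can be sketched with the Newtonian circular-orbit relation M(<r) = v²r/G; the flat speed of 220 km/s is an assumed, commonly quoted Milky Way figure.

```python
# Minimal sketch: for a circular orbit, Newtonian gravity gives
# M(<r) = v^2 * r / G, so a rotation curve that stays flat (v ~ constant)
# implies enclosed mass growing linearly with radius.
G = 6.674e-11        # m^3 kg^-1 s^-2
M_SUN = 1.989e30     # kg
KPC = 3.0857e19      # metres per kiloparsec

def enclosed_mass(v_km_s: float, r_kpc: float) -> float:
    v = v_km_s * 1000.0
    return v * v * (r_kpc * KPC) / G

for r in (8, 16, 32):  # radii in kpc, with v held flat at 220 km/s
    print(f"r = {r:2d} kpc : M(<r) ~ {enclosed_mass(220.0, r)/M_SUN:.1e} M_sun")
```

With v held constant, the enclosed mass doubles every time the radius doubles, which is the “missing mass” argument in miniature.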
The equation ΩM + ΩΛ + Ωk = 1 appears tantalizingly similar to Mr. Fermi's description of the three-part Hamiltonian for the atom: H = HA + HR + HI. Here, H corresponds to 1. ΩM, which represents matter density, is similar to HA, the bare mass as explained earlier. ΩΛ, which represents the cosmological constant, is similar to HR, the radiating bare charge. Ωk, which represents the curvature of the universe, is similar to HI, the interaction. This indicates, as Mr. Mason A. Porter and Mr. Predrag Cvitanovic had found out, that the macro and the micro worlds share the same sets of mathematics. Now we will explain the other aberrations.
Cosmologists tell us that the universe is homogeneous on the average, if it is considered on a large scale. The number of galaxies and the density of matter turn out to be uniform over sufficiently great volumes, wherever these volumes may be taken. What this implies is that the overall picture of the receding cosmic system is observed as if “simultaneously”. Since the density of matter decreases because of the cosmological expansion, the average density of the universe can only be assumed to be the same everywhere provided we consider each part of the universe at the same stage of expansion. That is the meaning of “simultaneously”. Otherwise, one part would look denser, i.e., “younger”, and another part less dense, i.e., “older”, depending on the stage of expansion we are looking at. This is because light propagates at a fixed velocity. Depending upon our distance from two areas of observation, we may actually be looking, at the same time, at objects in different stages of evolution. The uniformity of density could only be revealed if we could take a snap-shot of the universe. But the rays that are used for taking the snap-shot have finite velocities. Thus, they can get the signals from distant points only after a time lag. This time lag between the Sun and the earth is more than 8 minutes. On the scale of the Universe, it would be billions of years. Thus, the “snap-shot” available to us will reveal the Universe at different stages of evolution, which cannot be compared for density calculations. By observing the farthest objects – the Quasars – we can know what they were billions of years ago, but we cannot know what they look like now.
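The light-travel lag underlying this argument can be made concrete with a two-line computation using standard constants.

```python
# Minimal sketch of the light-travel time lag: distance / speed of light.
C = 2.998e8    # m/s
AU = 1.496e11  # Sun-Earth distance, m
LY = 9.461e15  # one light year, m

print(f"Sun -> Earth : {AU / C / 60:.1f} minutes")                      # ~8.3 min
print(f"1e9 ly away  : {1e9 * LY / C / 3.156e7 / 1e9:.1f} billion years")
```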
Another property of the universe is said to be its general expansion. In the 1930s, Mr. Edwin Hubble obtained a series of observations that indicated that our Universe began with a creation event. Observations since the 1930s show that clusters and super-clusters of galaxies, being at distances of 100-300 megaparsec (Mpc), are moving away from each other. Mr. Hubble discovered that all galaxies have a positive red-shift. Registering the light from the distant galaxies, it has been established that the spectral lines in their radiation are shifted to the red part of the spectrum. The farther the galaxy, the greater the red-shift! Thus, the farther the galaxy, the greater the velocity of recession, creating an illusion that we are right at the center of the Universe. In other words, all galaxies appear to be receding from the Milky Way.
By the Copernican principle (we are not at a special place in the Universe), the cosmologists deduce that all galaxies are receding from each other, or that we live in a dynamic, expanding Universe. The expansion of the Universe is described by a very simple equation called Hubble's law: the velocity of recession v of a galaxy is equal to a constant H times its distance d (v = Hd), where the constant is called Hubble's constant and relates distance to velocity in units of kilometres per second per megaparsec.
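A minimal sketch of this law, taking an assumed, conventionally quoted value of H₀ ≈ 70 km/s per Mpc.

```python
# Minimal sketch of Hubble's law v = H0 * d.
H0 = 70.0  # km/s per megaparsec (assumed illustrative value)

def recession_velocity(d_mpc: float) -> float:
    """Recession velocity in km/s for a galaxy at distance d_mpc megaparsecs."""
    return H0 * d_mpc

for d in (10, 100, 1000):
    print(f"d = {d:4d} Mpc : v ~ {recession_velocity(d):8.0f} km/s")
```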
The problem of dark matter and dark energy arose after the discovery of receding galaxies, which was interpreted as a sign that the universe is expanding. We posit that all galaxies appear to be receding from the Milky Way because they are moving with different velocities while orbiting the galactic center. Just as some planets in the solar system appear to be moving away at a much faster rate than others due to their motion around the Sun at different distances with different velocities, the galaxies appear to be receding from us. On cosmic scales, the period of observation since 1930 is negligible and cannot give any true indication of the nature of such recession. The recent findings support this view.
This cosmological principle – one of the foundations of the modern understanding of the universe – has come into question recently as astronomers find subtle but growing evidence of a special direction in space. The first and most well-established data point comes from the cosmic microwave background (CMB), the so-called afterglow of the big bang. As expected, the afterglow is not perfectly smooth – hot and cold spots speckle the sky. In recent years, however, scientists have discovered that these spots are not quite as randomly distributed as they first appeared – they align in a pattern that points to a special direction in space. Cosmologists have theatrically dubbed it the “axis of evil”. More hints of a cosmic arrow come from studies of supernovae, stellar cataclysms that briefly outshine entire galaxies. Cosmologists have been using supernovae to map the accelerating expansion of the universe. Detailed statistical studies reveal that supernovae are moving even faster in a line pointing just slightly off the axis of evil. Similarly, astronomers have measured galaxy clusters streaming through space at a million miles an hour toward an area in the southern sky. This proves our theory.
Thus, the mass density calculation of the universe is wrong. As we have explained in various forums, gravity is not a single force, but a composite force of seven. The seventh component closes in the galaxies. The other components work in pairs and can explain the Pioneer anomaly, the deflection of Voyager beyond Saturn’s orbit and the Fly-by anomalies. We will discuss this separately.
Extending the principle of bare mass further, we can say that from quarks to “neutron stars” and “black holes”, the particles or bodies that exhibit strong interaction, i.e., where the particles are compressed too close to each other – less than 10⁻¹⁵ m apart – can be called bare mass bodies. It must be remembered that the strong interaction is charge independent: for example, it is the same for neutrons as for protons. It also varies in strength between quarks and protons-neutrons. Further, the masses of the quarks show wide variations. Since mass is confined field, stronger confinement must be accompanied by stronger back reaction due to conservation laws. Thus, the outer negatively charged region must emit its signature intense x-rays in black holes and strangeness in quarks. Since proximity similar to that of the protons-neutrons is seen in black holes also, it is reasonable to assume that the strong force has a macro equivalent. We call these bodies “Dhruva” – literally meaning the pivot around which all mass revolves. This is because, be they quarks, nucleons or black-holes, they are at the center of all bodies. They are not directly perceptible. Hence it is dark matter. It is also bare mass without radiation.
When the particles are not too close together, i.e., at separations intermediate between those for the strong interaction and the electromagnetic interaction, they behave differently under the weak interaction. The weak interaction has distinctly different properties. This is the only known interaction in which violation of parity (spatial symmetry) and violation of the symmetry between particles and anti-particles have been observed. The weak interaction does not produce bound states (nor does it involve binding energy) – something that gravity does on an astronomical scale, the electromagnetic force does at the atomic level, and the strong nuclear force does inside nuclei. We call these bodies “Dhartra” – literally meaning that which induces fluidity. It is the force that constantly changes the relation between “inner space” and “outer space” of the particle without breaking its dimension. Since it causes fluidity, it helps in interactions with other bodies. It is also responsible for radioluminescence.
There are other particles that are not confined in any dimension. They are bundles of energy that are intermediate between the dense particles and the permittivity and permeability of free space – bare charge. Hence they are always unstable. Dividing them by c² does not indicate their mass, but it indicates the energy density against the permittivity and permeability of the field, i.e., the local space, as distinguished from “free space”. They can move out from the center of mass of a particle (gati) or move in from outside (aagati), when they are called its anti-particle. As we have already explained, the bare mass is not directly visible to the naked eye. The radiation or bare charge per se is also not visible to the naked eye. When it interacts with any object, then only that object becomes visible. When the bare charge moves in free space, it illuminates space. This is termed as light. Since it is not a confined dense particle, but moves through space like a wave moving through water, its effect is not felt on the field. Hence it has zero mass. For the same reason, it is its own anti-particle.
Some scientists link electric charge to permittivity and magnetism to permeability. Permittivity of a medium is a measure of how much charge it can take at a given voltage, or how much resistance is encountered when forming an electric field in the medium. Hence materials with high permittivity are used in capacitors. Since addition or release of energy leads the electron to jump to a higher or a lower orbit, permittivity is also linked to the rigidity of a substance. The relative static permittivity or dielectric constant of a solvent is a relative measure of its polarity, which is often used in chemistry. For example, water (very polar) has a dielectric constant of 80.10 at 20 °C, while n-hexane (very non-polar) has a dielectric constant of 1.89 at 20 °C. This information is of great value when designing separation processes.
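To make the capacitor remark concrete, here is a minimal sketch using the standard parallel-plate formula C = ε_r ε₀ A/d with the dielectric constants quoted above; the plate geometry is an arbitrary assumption.

```python
# Minimal sketch: a parallel-plate capacitor stores C = eps_r * eps_0 * A / d,
# so a higher-permittivity dielectric gives proportionally more capacitance.
EPS0 = 8.854e-12  # permittivity of free space, F/m

def capacitance(eps_r: float, area_m2: float, gap_m: float) -> float:
    return eps_r * EPS0 * area_m2 / gap_m

# Same (assumed) geometry, different dielectric, using the values in the text:
for name, eps_r in (("vacuum", 1.0), ("n-hexane", 1.89), ("water", 80.10)):
    c = capacitance(eps_r, area_m2=1e-4, gap_m=1e-5)  # 1 cm^2 plates, 10 um gap
    print(f"{name:8s}: {c*1e12:7.1f} pF")
```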
Permeability of a medium is a measure of the magnetic flux it exhibits when the amount of charge is changed. Since magnetic field lines surround the object, effectively confining it, some scientists remotely relate it to density. This may be highly misleading, as permeability is not a constant. It can vary with the position in the medium, the frequency of the field applied, humidity, temperature, and other parameters, such as the strength of the magnetic field, etc. Permeability of vacuum is treated as 1.2566371×10⁻⁶ H/m – the same as that of hydrogen – even though the susceptibility χm (volumetric SI) of vacuum is treated as 0, while that of hydrogen is treated as −2.2×10⁻⁹. Relative permeability of air is taken as 1.00000037. This implies vacuum is full of hydrogen only.
This is wrong, because only about 81% of the cosmos consists of hydrogen and 18% helium. The temperature of the cosmic microwave background is about 2.73 K, while that of the interiors of galaxies goes to millions of kelvin. Further, molecular hydrogen occurs in two isomeric forms. One has its two proton spins aligned parallel, forming a triplet state (I = 1; α₁α₂, (α₁β₂ + β₁α₂)/√2, or β₁β₂, for which MI = 1, 0, −1 respectively) with a molecular spin quantum number of 1 (½+½). This is called ortho-hydrogen. The other has its two proton spins aligned anti-parallel, forming a singlet (I = 0; (α₁β₂ − β₁α₂)/√2, MI = 0) with a molecular spin quantum number of 0 (½−½). This is called para-hydrogen. At room temperature and thermal equilibrium, hydrogen consists of 25% para-hydrogen and 75% ortho-hydrogen, also known as the “normal form”.
The equilibrium ratio of ortho-hydrogen to para-hydrogen depends on temperature, but because the ortho form is an excited state and has a higher energy than the para form, it is unstable. At very low temperatures, the equilibrium state is composed almost exclusively of the para form. The liquid and gas phase thermal properties of pure para-hydrogen differ significantly from those of the normal form because of differences in rotational heat capacities. A molecular form called protonated molecular hydrogen, H₃⁺, is found in the inter-stellar medium, where it is generated by ionization of molecular hydrogen by cosmic rays. It has also been observed in the upper atmosphere of the planet Jupiter. This molecule is relatively stable in the environment of outer space due to the low temperature and density. H₃⁺ is one of the most abundant ions in the Universe. It plays a notable role in the chemistry of the interstellar medium. Neutral tri-atomic hydrogen H₃ can only exist in an excited form and is unstable.
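The temperature dependence of this equilibrium can be sketched with the standard rotational partition functions (ortho levels carry nuclear-spin weight 3, para levels weight 1); the rotational temperature of about 87.6 K for H₂ is an approximate published value.

```python
import math

# Minimal sketch, assuming the standard statistical-mechanics treatment:
# para-hydrogen occupies even-J rotational levels (weight 1), ortho-hydrogen
# odd-J levels (weight 3), with E_J = k_B * THETA_ROT * J * (J + 1).
THETA_ROT = 87.6  # rotational temperature of H2 in kelvin (approximate)

def ortho_fraction(T: float, j_max: int = 40) -> float:
    z_para = sum((2*j + 1) * math.exp(-THETA_ROT * j * (j + 1) / T)
                 for j in range(0, j_max, 2))
    z_ortho = 3 * sum((2*j + 1) * math.exp(-THETA_ROT * j * (j + 1) / T)
                      for j in range(1, j_max, 2))
    return z_ortho / (z_ortho + z_para)

for T in (20, 77, 300):
    print(f"T = {T:3d} K : ortho fraction ~ {ortho_fraction(T):.2f}")
```

At room temperature the ortho fraction approaches the 75% “normal form” figure quoted above, while near 20 K the equilibrium is almost pure para-hydrogen.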
In the 18th century, before the modern concepts of atomic and sub-atomic particles were known, Mr. Charles Augustin de Coulomb set up an experiment using an early version of what we call a torsion balance to observe how charged pith balls reacted to each other. These pith balls represented point charges – charged bodies that are very small when compared to the distance between them. Mr. Coulomb observed two behaviors of the electric force:
1. The magnitude of electric force between two point charges is directly proportional to the product of the charges.
2. The magnitude of the electric force between two point charges is inversely proportional to the square of the distance between them.
The general description of Coulomb’s law overlooks some important facts. The pith balls are spherical in shape. Thus, he got an inverse square law because the spheres emit a spherical field, and a spherical field must obey the inverse square law because the density of spherical emission must fall off inversely with the square of the distance. The second oversight is the emission field itself. It is a real field with its own mechanics, real photons and real energy, as against a virtual field with virtual photons and virtual energy used in QED, QCD and QFT, where quanta can emit quanta without dissolving, in violation of conservation laws. If the electromagnetic field is considered to be a real field with real energy and mass equivalence, all the mathematics of QED and QFT would fail. In the 1830s, Mr. Faraday assumed that the “field” was non-physical and non-mechanical, and QED still assumes this. The electromagnetic field, like the gravitational field, obeys the inverse square law because the objects in the field, from protons to stars, are spheres. Coulomb’s pith balls were spheres. The field emitted by these is spherical. The field emitted by protons is also spherical. This determines the nature of charges and forces.
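A minimal numerical sketch of this geometric claim (illustrative only): emission spread uniformly over a sphere of radius r covers an area 4πr², so its surface density falls off as 1/r²:

import math

def surface_density(total_emission, r):
    """Emission per unit area at radius r from a uniform spherical source."""
    return total_emission / (4 * math.pi * r ** 2)

for r in (1.0, 2.0, 4.0):
    print(r, surface_density(1.0, r))   # falls as 1/r^2: 1 : 1/4 : 1/16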
As we have repeatedly pointed out, multiplication implies non-linearity. It also implies two-dimensional fields. A medium or a field is a substance or material which carries the wave. It is a region of space characterized by a physical property having a determinable value at every point in the region. This means that if we put something appropriate in a field, we can then notice “something else” out of that field, which makes the body interact with other objects put in that field in some specific ways that can be measured or calculated. This “something else” is a type of force. Depending upon the nature of that force, scientists categorize the field as a gravity field, electric field, magnetic field, electromagnetic field, etc. The laws of modern physics suggest that fields represent more than the possibility of the forces being observed. They can also transmit energy and momentum. A light wave is a phenomenon that is completely defined by fields. We posit that like a particle, the field also has a boundary, but unlike a particle, it is not a rigid boundary. Also, its intensity or density gradient falls off with distance. A particle interacts with its environment as a stable system - as a whole. Its equilibrium is within its dimensions. It is always rigidly confined till its dimension breaks up due to some external or internal effect. A field, on the other hand, interacts continuously with its environment to bring in uniform density – to bring in equilibrium with the environment. These are the distinguishing characteristics that are revealed in fermions (we call these satyam) and bosons (we call these rhtam) and explain superposition of states.
From the above description, it is apparent that there are two types of fields. One is the universal material field, in which the other individual energy sub-fields like the electric field, magnetic field, electromagnetic field, etc., appear as variables. We call these variable sub-fields “jaala” – literally meaning a net. Anything falling in that net is affected by it. The universal material field is also of two types: stationary fields, where only impulses and not particles or bodies are transmitted, and mobile fields, where objects are transmitted. The other category of field explains conscious actions.
Coulomb’s law states that the electrical force between two charged objects is directly proportional to the product of the quantity of charge on the objects and is inversely proportional to the square of the distance between the centers of the two objects. The interaction between charged objects is a non-contact force which acts over some distance of separation. In equation form, Coulomb’s law is stated as:

F = k · Q1 · Q2 / d²
where Q1 represents the quantity of charge on one object in coulombs, Q2 represents the quantity of charge on the other object in coulombs, and d represents the distance between the centers of the two objects in meters. The symbol k is the proportionality constant known as the Coulomb’s law constant. To find the electric force on one atom, we need to know the density of the electromagnetic field said to be mediated by photons relative to the size of the atom, i.e., how many photons are impacting it each second, and sum up all these collisions. However, there is a difference in this description when we move from the micro field to the macro field. The interactions at the micro level are linear – up and down quarks, or protons and electrons, in equal measure. However, different types of molecular bonding make these interactions non-linear at the macro level. So a charge measured at the macro level is not the same as a charge measured at the quantum level.
It is interesting to note that according to the Coulomb’s law equation, interaction between a charged particle and a neutral object (where either Q1 or Q2 = 0) is impossible, as in that case the equation becomes meaningless. But this goes against everyday experience. Any charged object - whether positively charged or negatively charged - has an attractive interaction with a neutral object. Positively charged objects and neutral objects attract each other; and negatively charged objects and neutral objects attract each other. This also shows that there are no charge-neutral objects: the so-called charge-neutral objects are really objects in which both the positive and the negative charges are in equilibrium. Every charged particle is said to be surrounded by an electric field - the area in which the charge exerts a force. This implies that in charge-neutral objects, there is no such field – hence no electric force should be experienced. It is also said that particles with nonzero electric charge interact with each other by exchanging photons, the carriers of the electromagnetic force. If there is no field and no force, then there should be no interaction – hence no photons. This presents a contradiction.
Charge in Coulomb’s law has been defined in terms of coulombs. One coulomb is one ampere-second. Electrostatics describes stationary charges. Flowing charges are electric currents. Electric current is defined as a measure of the amount of electrical charge transferred per unit time through a surface (the cross section of a wire, for example). It is also defined as the flow of electrons. This means that it is a summed-up force exerted by a huge number of quantum particles. It is measured at the macro level. The individual charge units belong to the micro domain and cannot be measured.
Charge has not been specifically defined except that it is a quantum number carried by a particle which determines whether the particle can participate in an interaction process. This is a vague definition. The degree of interaction is determined by the field density. But density is a relative term. Hence in certain cases, where the field density is more than the charge or current density, the charge may not be experienced outside the body. Such bodies are called charge neutral bodies. Introduction of a charged particle changes the density of the field. The so-called charge neutral body reacts to such change in field density, if it is beyond a threshold limit. This limit is expressed as the proportionality constant in Coulomb’s law equation. This implies that, a charged particle does not generate an electric field, but only changes the intensity of the field, which is experienced as charge. Thus, charge is the capacity of a particle to change the field density, so that other particles in the field experience the change. Since such changes lead to combining of two particles by redistribution of their charge to affect a third particle, we define charge as the creative competence (saamarthya sarva bhaavaanaam).
Electric current is the time rate of change of charge (I = dQ/dt). Since charge is measured in coulombs and time is measured in seconds, an ampere is the same as a coulomb per second. This is an algebraic relation, not a definition. The ampere is that constant current which, if maintained in two straight parallel conductors of infinite length and negligible circular cross-section, placed one meter apart in vacuum, would produce between these conductors a force equal to 2 × 10−7 newton per meter of length. This means that the coulomb is defined as the amount of charge that passes through an almost flat surface (a plane) when a current of one ampere flows for one second. If the breadth of the so-called circular cross-section is not negligible, i.e., if it is not a plane or a field, this definition will not be applicable. Thus, currents flow in planes or fields only. Electric current is not a vector quantity, as it does not flow in free space through diffusion or radiation in a particular direction (like a muon or tau respectively). Current is a scalar quantity, as it flows only through convection towards lower density – thus, within a fixed area – not in any fixed direction. The ratio of current to area for a given surface is the current density. Despite being the ratio of two scalar quantities, current density is treated as a vector quantity, because its flow is dictated according to fixed laws by the density and movement of the external field. Hence, it is defined as the product of charge density and velocity for any location in space.
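These relations can be made concrete with assumed numbers; the copper figures in the sketch below are common textbook values, not taken from this text:

import math

Q, t = 3.0, 2.0              # charge (C) and time (s), assumed values
I = Q / t                    # current: 1 A = 1 C/s
r = 1.0e-3                   # wire radius (m), assumed
A = math.pi * r ** 2         # cross-sectional area (m^2)
J = I / A                    # current density, A/m^2

n = 8.5e28                   # free electrons per m^3 in copper (textbook)
e = 1.602e-19                # elementary charge (C)
rho = n * e                  # mobile charge density, C/m^3
v = J / rho                  # drift velocity from J = rho * v
print(f"I = {I} A, J = {J:.3e} A/m^2, drift v = {v:.3e} m/s")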
The factor d² shows that the force depends on the distance between the two bodies, which can be scaled up or down. Further, since it is a second order term, it represents a two-dimensional field. Since the field is always analogous, the only interpretation of the equation is that it is an emission field. The implication of this is that it is a real field with real photons with real energy, and not a virtual field with virtual photons or messenger photons as described by QED and QCD, because that would violate conservation laws: a quantum cannot emit a virtual quantum without first dissolving itself. Also, complex terminology and undefined terms like Hamiltonians, tensors, gauge fields, complex operators, etc., cannot be applied to real fields. Hence, either QED and QCD are wrong or Coulomb’s law is wrong. Alternatively, either one or the other or both have to be interpreted differently.
Where the external field remains constant, the interaction between two charges is reflected as the non-linear summation (multiplication) of the effect of each particle on the field. Thus, if one quantity is varied, to achieve the same effect, the other quantity must be scaled up or down proportionately. This brings in the scaling constant, which is termed as k - the proportionality constant relative to the macro density. Thus, the Coulomb’s law gives the correct results. But this equation will work only if the two charges are contained in spherical bodies, so that the area and volume of both can be scaled up or down uniformly by varying the diameter of each. Coulomb’s constant can be related to the Bohr radius. Thus, in reality, it is not a constant, but a variable. This also shows that the charges are emissions in a real field and not mere abstractions. However, this does not prove that same charge repels and opposite charges attract.
The interpretation of Coulomb’s law that same charges repel played a big role in postulating the strong interaction. Protons exist in the nucleus at very close quarters. Hence they should experience a strong repulsion. Therefore it was proposed that an opposite force overwhelmed the charge repulsion. This confining force was called the strong force. There is no direct proof of its existence. It is still a postulate. To make this strong force work, it had to change very rapidly, i.e., it should turn on only at nuclear distances, but turn off at the distance of the first orbiting electron. Further, it should be a confining force that did not affect electrons. Because the field had to change so rapidly (had such high flux), it had to get extremely strong at even smaller distances. Logically, if it got weaker so fast at greater distances, it had to get stronger very fast at smaller distances. In fact, according to the equations, it would approach infinity at the size of the quark. This didn’t work in QCD, since the quarks needed their freedom. They could not be infinitely bound, since this force would not agree with experimental results in accelerators. Quarks that were infinitely bound could not break up into mesons.
To calculate the flux, one must calculate how the energy of the field approaches the upper limit. This upper limit is called an asymptote. An asymptote is normally a line on a graph that represents the limit of a curve. Calculating the approach to this limit can be done in any number of ways. Mr. Lev Landau, following the principles of QED, developed a famous equation to find what is now called a Landau pole - the energy at which the force (the coupling constant) becomes infinite. Mr. Landau found this pole or limit or asymptote by subtracting the bare electric charge e from the renormalized or effective electric charge eR:
The value for the bare electric charge e has been obtained by one method and the value for the effective electric charge eR has been obtained by another method to match the experimental value. One value is subtracted from the other to find a momentum over a mass (which is of course a velocity). If we keep the renormalized variable eR constant, we can find where the bare charge becomes singular. Mr. Landau interpreted this to mean that the coupling constant had gone to infinity at that value, and called that energy the Landau pole. In any given experiment, the electron has one and only one charge, so that either e or eR must be incorrect. No one has ever measured the “bare charge”. It has never been experimentally verified. All experiments show only the effective charge. Bare charge is a mathematical assumption. If two mathematical descriptions give us two different values, both cannot be correct in the same equation. Hence either the original mathematics or the renormalized mathematics must be wrong. Thus Mr. Landau subtracted an incorrect value from a correct value to obtain real physical information, because he had first de-normalized the equation by inventing the infinity! We have already shown the fallacies inherent in this calculation while discussing division by zero and the Lorentz force law. Thus, Göckeler et al. (arXiv:hep-th/9712244v1) found that: “A detailed study of the relation between bare and renormalized quantities reveals that the Landau pole lies in a region of parameter space which is made inaccessible by spontaneous chiral symmetry breaking”.
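For reference, the textbook one-loop running of the QED coupling (the standard modern formula, which may not be the exact equation Landau wrote) places the pole at a fantastically high energy:

import math

alpha = 1 / 137.035999   # fine structure constant at the electron scale
m_e = 0.511e6            # electron rest energy, eV

# One-loop running: 1/alpha(mu) = 1/alpha - (2/(3*pi)) * ln(mu/m_e),
# which vanishes (the coupling diverges) at mu = m_e * exp(3*pi/(2*alpha)).
mu_pole = m_e * math.exp(3 * math.pi / (2 * alpha))
print(f"Landau pole ~ 10^{math.log10(mu_pole):.0f} eV")   # ~10^286 eV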
It is interesting to note that the charge of the electron has been measured by the oil drop experiment, but the charges of protons and neutrons have not been measured, as it is difficult to isolate them. Historically, the proton has been assigned a charge of +1 and the neutron a charge of zero on the assumption that the atom is charge-neutral. But the fact that most elements exist not as atoms, but as molecules, shows that the atoms are not charge-neutral. We have theoretically derived the charges of quarks as −4/11 and +7/11 instead of the generally accepted values of −1/3 and +2/3. This makes the charges of protons and neutrons +10/11 and −1/11 respectively. This implies that both proton and neutron have a small amount of negative charge (−1 + 10/11) and the atom as a whole is negatively charged. This residual negative charge is not felt, as it is directed towards the nucleus.
According to our theory, only same charges attract. Since the proton and electron combined have the same charge as the neutron, they co-exist as stable structures. We have already described the electron as like the termination shock at the heliosheath that encompasses the “giant bubble” surrounding the Solar system, which is the macro equivalent of the extra-nuclear space. Thus, the charge of the electron actually is the strength of confinement of the extra-nuclear space. The neutron behaves like the solar system within the galaxy – a star confined by its heliospheric boundary. However, the electric charges (−1/11 for proton + electron and −1/11 for neutron) generate a magnetic field within the atom. This doubling in the intensity of the magnetic field in the stagnation region, i.e., the boundary region of the atom, behaves like cars piling up at a clogged freeway off-ramp. The increased intensity of the magnetic field generates inward pressure from inter-atomic space, compacting it. As a result, there is a 100-fold increase in the intensity of high-energy electrons from elsewhere in the field diffusing into the atom from outside. This leads to 13 different types of interactions that will be discussed separately.
When bare charges interact, they interact in four different ways as follows:
• Total (equal) interaction between positive and negative charges does not change the basic nature of the particle, but only increases their mass number (pushtikara).
• Partial (unequal) interaction between positive and negative charges changes the basic nature of the particle by converting it into an unstable ion searching for a partner to create another particle (srishtikara).
• Interaction between two negative charges does not change anything (nirarthaka) except an increase in magnitude when flowing as a current.
• Interaction between two positive charges becomes explosive (vishphotaka), leading to a fusion reaction at the micro level or a supernova explosion at the macro level with its consequent release of energy.
Since both protons and neutrons carry a residual negative charge, they do not explode, but co-exist. But a supernova contains only positively charged particles, squeezed into a small volume and forced to interact. As explained above, it can only explode. This explosion brings the individual particles in contact with the surrounding negative charge. Thus, the higher elements from iron onwards are created in such explosions, which is otherwise impossible.
The micro and the macro replicate each other. Mass and energy are not convertible at macro and quantum levels, but are inseparable complements. They are convertible only at the fundamental level of creation (we call it jaayaa). Their inter se density determines whether the local product is mass or energy. While mass can be combined in various proportions, so that there can be various particles, energy belongs to only one category, but appears differently because of its different interaction with mass. When both are in equilibrium, it represents the singularity. When singularity breaks, it creates entangled pairs of conjugates that spin. When such conjugates envelop a state resembling singularity, it gives rise to other pairs of forces. These are the five fundamental forces of Nature – gravity that generates weak and electromagnetic interaction, which leads to strong interaction and radioactive disintegration. Separately we will discuss in detail the superposition of states, entanglement, seven-component gravity and fractional (up to 1/6) spin. We will also discuss the correct charge of quarks (the modern value has an error component of 3%) and derive it from fundamental principles. From this we will theoretically derive the value of the fine structure constant (7/960 at the so-called zero energy level and 7/900 at 80 GeV level). |
644c025bea250f09 | From Uncyclopedia, the content-free encyclopedia
Imagine what this creature would look like if it didn't exist. That is exactly what the Anti-Lemming looks like.
“If a Lemming comes into contact with an Anti-Lemming the function of the two Lemmings cancel out, thus resulting in an explosion”
~ Robert Oppenheimer on Anti-Lemmings
“If a Lemming and a Lemming come into contact, there is no explosion. There may however be sex”
~ Max Planck on Lemmings
“If an Anti-Lemming and an Anti-Lemming come into contact there is an Anti-explosion”
~ Nikola Tesla on Anti-Lemmings
The Anti-Lemming (otherwise known as the Gnimmel) is the complete polar opposite of the Lemming. Many respectable scientists, namely Mahatma Gandhi, have dedicated many seconds of their lives to proving the existence of the Anti-Lemming. Robert Oppenheimer was the first to mathematically prove the existence of the Anti-Lemming, however he differentiated one too many equations, which consequently led to the disproving of his existence, hence it never happened. Another scientist by the name of Ernest Rutherford provided an alternative view that revolutionized the world's perception of Anti-Lemmings. "You see, we must first view Lemmings as a unit of work. Lemmings can be miners, climbers, builders, floaters or suicide bombers" said Ernest Rutherford in an interview on sheep shagging in the town of Nelson, New Zealand. "If Lemmings are a unit of work, then Anti-Lemmings must clearly be a unit of anti-work". This train of thought eventually led to the first ever splitting of the Anti-Lemming, causing many unexpected disasters, such as Chernobyl.
Mathematical Proof of Anti-Lemmings
Robert Oppenheimer, who does not not exist due to the self-mathematical proving of his non-existence, was the first to hypothesize and thus express the mathematical function of the Anti-Lemming, which was as follows (plus or minus an x or a y somewhere):
AntiLemming = e^(x² − √x) · i · (y⁴²) / (Lemming²) · dy/dx + C
The exclusion of +C in the above proof of Anti-Lemmings has resulted in many failures of students' mathematics exams, a number which is also proportional to the global number of McDonalds staff members. This proof was elaborated by Mahatma Gandhi, who was reported as saying "If 3 is 3, then the opposite of 3 must be -3". Thus if an Anti-Lemming is the opposite of the Lemming then it can be said that:
AntiLemming = 1 - Lemming
Upon seeing Gandhi's simplified expression of an Anti-Lemming, Robert Oppenheimer immediately performed his mathematical suicide at 10:50 pm. This was also on the same day that John Lennon of The Beatles was found dead outside The Dakota, New York. Conspiracy theorists claim this was the Anti-Lemmings' plan all along. The root of the Anti-Lemming serves not only to vent sexual frustration but also results in an imaginary function as opposed to a real one, as is evident in the following equation:
i · AntiLemming = √(1 − Lemming)
As the imaginary Anti-Lemming is the complex of the real, but already imagined, Anti-Lemming it is believed that the correct terminology for the imaginary Anti-Lemming is the AntiAntiLemming which differs from the real Lemming in that it has two Antis before its name. The complex of the Anti-Lemming has been the foundation of answering various existential arguments such as "is there a God?", "can we ever achieve world peace?" and "Why Chuck Norris is better than everyone else". When performing mathematical operations on the Anti-Lemming it is imperative that one does not divide by zero. Doing so results in the end of the world, which is not a variable, it is certain:
End.of.the.World = AntiLemming / 0
The only being who can divide an Anti-Lemming, or anything for that matter, by zero is Chuck Norris. Dividing Chuck Norris by zero gives rise to both the Anti-Lemming and the end of the world in an equation too long to witness.
Splitting of the Anti-Lemming
Ernest Rutherford was the first to split the Anti-Lemming, following Young Einstein, who was the first to split the beer atom in Australia. The composition of the Anti-Lemming was shown to consist of sub-atomic particles, such as Lem and Ming. Rutherford experimentally proved that the combining of both Lem and Ming particles resulted in the production of a Lemming. Yang, however, was not a component of the Lemming structure and was so depressed that Ming had cheated on her with Lem that she drank every night and ended up working in a stripper bar. Rutherford further showed that for the reaction to occur, Lem and Ming must collide with sufficient energy; if Lem and Ming do not have the minimal amount of energy for the reaction to take place, Lem is left feeling very self-conscious and Ming brings up an argument about signing divorce papers.
While training hard to split his Anti-Lemming, Max Planck accidentally evolved his Anti-Lemming into a Taco upon promoting the Lem particle to an excited state (which pleased Ming very much).
The Taco was shown to be the final evolved form of the Anti-Lemming by Max Planck after training hard in Pokemon Orange
Thereafter the Taco learned many new moves such as "Harden" which made it a formidable foe, but Max still missed his Anti-Lemming and wished he had never tried so hard to split it in the first place.
Charles Darwin had many troubles accepting the evolution of the Anti-Lemming into the Taco, though this was only because he could never capture a Taco in Pokemon Orange, because his only Pokemon was a useless Arceus. Nikola Tesla was the only known scientist to have ever managed to split the Anti-Lemming four times into eight distinct pieces, each served on fresh bread along with cheese, tomato and capsicum in the form of a pizza. Consuming the Anti-Lemming pizza has been shown to lead to undesirable consequences such as thallium poisoning, head implosions, radiation leaks, internal bleeding, cancer and diarrhea. This may either be due to the self-splitting of the Anti-Lemming inside the body of the consumer, or simply the consumer just being naturally retarded, like Madonna Ciccone.
The splitting of the Anti-Lemming gave rise to much protest from religious fractions who were indeterminately sure that splitting an Anti-Lemming would release a massive amount of energy and end the world. However, it was already proven at that time that the only way to end the world using an Anti-Lemming was to divide it by zero, which went largely unnoticed by the religious community at the time because they were too busy sacrificing animals and burning witches because God told them to. The revolutionary work of Anti-Lemming splitting was not truly recognized until the publication of the Schrödinger equation, which modeled the entire insides of the Anti-Lemming with and without clothes. However, upon the publication all the scientists decided to waste their time extrapolating lesser important things such as atoms and molecules; the study of the Anti-Lemming was forever forgotten.
Death of the Anti-Lemming
The death of the Anti-Lemming notably occurred when all of the Anti-Lemmings decided to stop being non-conforming assholes and started acting like regular lemmings. This resulted in the Anti-Lemmings jumping off various cliffs in aspiration of their retarded arctic rodent cousins. This popular tradition eventually found its way into a popular video game, which unfortunately only starred Lemmings, as all the Anti-Lemmings had either evolved or been split or mathematically disproved by that stage. The development of this game is what distracted the Ukrainian workers of the Chernobyl nuclear power plant who, had they not been exercising their thumbs on the Mega Drive, may have prevented a very inconvenient and tragic historical event.
|
0a60822306aa7166 | Information 2012, 3(4), 809-831; doi:10.3390/info3040809
Implementation of Classical Communication in a Quantum World
Chris Fields
815 East Palace # 14, Santa Fe, NM 87501, USA; E-Mail:; Tel.: +1-505-995-9859
Received: 11 July 2012; in revised form: 31 October 2012 / Accepted: 6 December 2012 / Published: 13 December 2012
Abstract: Observations of quantum systems carried out by finite observers who subsequently communicate their results using classical data structures can be described as “local operations, classical communication” (LOCC) observations. The implementation of LOCC observations by the Hamiltonian dynamics prescribed by minimal quantum mechanics is investigated. It is shown that LOCC observations cannot be described using decoherence considerations alone, but rather require the a priori stipulation of a positive operator-valued measure (POVM) about which communicating observers agree. It is also shown that the transfer of classical information from system to observer can be described in terms of system-observer entanglement, raising the possibility that an apparatus implementing an appropriate POVM can reveal the entangled system-observer states that implement LOCC observations.
Keywords: decoherence; einselection; emergence; entanglement; quantum-to-classical transition; virtual machines
1. Introduction
Suppose spatially-separated observers Alice and Bob each perform local measurements on a spatially-extended quantum system—for example, a pair of entangled qubits in an asymmetric Bell state—and afterwards communicate their experimental outcomes to each other. This “local operations, classical communication” (LOCC, e.g., [1] Chapter 12) scenario characterizes quantum key distribution, preparation of the initial states and subsequent observation of the final states of quantum computers, and practical laboratory investigations of spatially-extended quantum systems; indeed LOCC characterizes all situations in which two or more observers interact with a quantum system and then report their observations by encoding them into sharable classical data structures. Formal descriptions of LOCC scenarios generally specify the quantum system S with which the observers interact by explicitly specifying its quantum degrees of freedom and hence its Hilbert space ℋS; in addition, they typically explicitly specify the “prepared” quantum state |ψ⟩ with which the observers interact, for example by an expression such as “|ψ⟩ = (1/√2)(|0⟩A|1⟩B + |1⟩A|0⟩B)”, where |0⟩ and |1⟩ are basis vectors and “A” and “B” name Alice and Bob, respectively. The “local operations” are generally dealt with cursorily: Alice and Bob are said to measure spin or polarization, for example, with the details of the apparatus used to do so, if any are given, relegated to the Methods section. The “classical communication” between Alice and Bob is rarely discussed at all. Understanding LOCC in physical terms, however, requires not just understanding the quantum state being observed, but understanding both the “local operations” and the “classical communication” as physical processes.
Let us begin with classical communication. Any finite message from Bob to Alice can be represented as a finite sequence of classical bits. It must, moreover, be encoded in some physical medium [2]—notes in a logbook, for example, or an email message, or coherent vibrations of air molecules. Bob encodes the message and Alice receives it by performing local operations on the physical medium employed for transmission. Successful transmission requires, therefore, that Alice monitor the medium for messages, and that Alice and Bob share an encoding/decoding scheme—a data structure with Encode and Decode methods—as well as a semantics for that data structure that renders the message meaningful. These requirements are independent of whether Alice and Bob are human beings or non-human information-processing machines; two computers attached to the internet must share a communication protocol (e.g., TCP/IP) and must share assumptions about both the syntax and semantics of the data structures employed to encode transmitted messages.
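A minimal sketch of such a shared scheme (the class and method names below are illustrative, not from the paper): communication succeeds only because Alice and Bob hold the same codebook; the semantics of the bit string lives in the shared data structure, not in the physical medium.

class Codebook:
    """A shared protocol: fixed-width binary codes for agreed outcomes."""
    def __init__(self, outcomes, width=8):
        self.encode_map = {o: format(i, f"0{width}b")
                           for i, o in enumerate(outcomes)}
        self.decode_map = {bits: o for o, bits in self.encode_map.items()}

    def encode(self, outcome):   # Bob's local operation on the medium
        return self.encode_map[outcome]

    def decode(self, bits):      # Alice's local operation on the medium
        return self.decode_map[bits]

shared = Codebook(["spin up", "spin down"])   # agreed before the experiment
message = shared.encode("spin up")            # Bob writes "00000000"
print(shared.decode(message))                 # Alice reads back "spin up"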
The local operations performed by Alice and Bob have, therefore, two distinct targets. Alice and Bob must each operate locally on S to extract classical information, and must each operate locally on their shared communication medium to either encode (Bob) or decode (Alice, and Bob if he checks his encoding) the classical information contained in the transmitted message. Most discussions of LOCC acknowledge that the interactions with S involve quantum measurement; most neglect the fact that, if quantum theory is assumed to be universal, the encoding and decoding steps also involve interactions with a quantum system: the physical medium of communication. Most, moreover, neglect the fact that Alice and Bob are themselves quantum systems. The purpose of the present paper is to examine LOCC from a perspective that acknowledges these facts; it is, therefore, to ask what is required to implement LOCC in a quantum world.
The next section, “Preliminaries”, discusses the fundamental assumption that quantum theory is universal and two of its consequences: That the extraction of classical information from quantum systems can be represented by the actions of positive operator-valued measures (POVMs, reviewed by [1] Chapter 2), and that observers must deploy POVMs to identify quantum systems of interest. The third section, “Decompositional equivalence and its consequences”, discusses a second fundamental assumption: that the universe as a whole exhibits a symmetry, decompositional equivalence, that allows alternative tensor-product structures (TPSs) for a single Hilbert space [3]. Like the assumption of universality, decompositional equivalence is an empirical assumption; if it is true, physical dynamics cannot depend in any way on TPSs that may be specified as defining “systems” of interest. In a universe satisfying decompositional equivalence, system-environment decoherence, which depends for its definition on the specification of a TPS, can have no physical consequences, and hence can neither create nor alter physical encodings of classical information. Observers cannot, therefore, take for granted physical encodings by their shared environment of either the boundaries or the pointer states of specific systems of interest, as is proposed in the “environment as witness” formulation of decoherence theory [4,5] and quantum Darwinism [6,7]. The fourth section, “Decoherence as semantics”, shows that decoherence can be represented as the action of a POVM, and hence as being a semantic or model-theoretic mapping from physical systems to classical data structures, and in particular to classical virtual machines. It is shown that the semantic consistency conditions for constructing such mappings are those familiar from the consistent histories formulation of quantum measurement (e.g., [8]). The fifth section, “Observation as entanglement”, returns to the question of how multiple observers in a LOCC setting identify and determine the state of a single system and then communicate their results. It shows that LOCC requires an infinite regress of assumptions regarding prior classical communications between the observers involved. In the absence of further assumptions, therefore, observations under LOCC conditions cannot be carried out in a universe characterized by both universal quantum theory and decompositional equivalence. It is then shown that the classical correlation between the states of an observer and an observed system produced by the action of a POVM would result from observer-system entanglement, and that such a correlation would be perfect if the entanglement was monogamous. Hence observation mediated by a POVM can be regarded as an alternative formal description of quantum entanglement; the transfer of classical information such entanglement enables is independent of system boundaries and relative, for any third party, to the specification of an appropriate basis for the joint system-observer state. While this result renders the explanation of classical communication in terms of an observer-independent physical process of “emergence” unattainable, it offers the possibility that an apparatus implementing an appropriate POVM could reveal the specific system-observer entanglements that implement the observation of classical outcomes. 
The paper concludes that the appearance of shared public classicality in the physical world is fully analogous to the appearance of algorithm instantiation in classical computer science: Both are cases of a shared jointly stipulated semantic interpretation.
2. Preliminaries
2.1. Assumption: Quantum Theory is Universal
The first and most fundamental assumption made here is that quantum theory is universal: All physical systems are quantum systems. The universe U, in particular, is a physical system; it is therefore a quantum system, and can be characterized by a Hilbert space ℋU comprising a collection of quantum degrees of freedom. The universe is moreover, as assumed by Everett [9], not part of anything else; it is an isolated quantum system. The evolution of the universal quantum state |ΨU⟩, therefore, satisfies a Schrödinger equation ∂|ΨU⟩/∂t = −(i/ℏ)HU|ΨU⟩, where HU is a deterministic universal Hamiltonian. This assumption rules out any objective non-unitary “collapse” of |ΨU⟩; it amounts to the adoption of what Landsman [10] calls “stance 1” regarding quantum theory, a stance that is realist about quantum states, and therefore demands an explanation for the appearance of classicality. All available experimental evidence is consistent with this universality assumption [11]. Alice, Bob, the systems that they observe and the systems that they employ to encode classical communications are, on this assumption, all collections of quantum degrees of freedom evolving under the action of the universal Hamiltonian HU.
The assumption that all physical systems are quantum systems clearly does not entail that all descriptions of physical systems are quantum-theoretical descriptions. Some descriptions are classical; others are quantum-theoretical. Classical descriptions of physical systems are in some cases (e.g., for billiard balls) sufficient for practical purposes, while in other cases (e.g., for electrons) they are not. The observable world appears classical to human observers employing their unaided senses; this appearance will be referred to as “observational classicality”. Human observers, moreover, record and communicate their observations using classical data structures, as do all artificial observers thus far constructed by humans. Hence all descriptions of physical, i.e., quantum systems, whether they are classical descriptions or quantum-theoretical descriptions, are both recorded for future access and communicated using classical data structures, regardless of whether the observers involved are humans or artifacts. It is this classicality of recorded descriptions that both motivates and requires LOCC as a characterization of the interaction of multiple observers with a quantum, i.e., physical system.
Under the assumption of universality, understanding the requirements of LOCC in the case of either Alice or Bob individually clearly requires understanding quantum measurement, and in particular understanding whether observational classicality can be supposed to “emerge” from the dynamics specified by HU. If the observed system S is regarded as a quantum information processor, this question of observational classicality becomes the question of how the behavior of S can be interpreted as computation. How, for example, do the unitary transformations of the quantum state of a quantum Turing machine (QTM, [12]) or Hamiltonian oracle [13] implement a computation on a classical data structure encoded by the system’s initial state? In what sense do the events that occur between measurements in a measurement-based quantum computer [14] implement computation? That these questions are both foundational to quantum computing and non-trivial has been emphasized by Aaronson [15].
What the LOCC concept adds to the quantum measurement problem as traditionally presented (e.g., [10,16]) is the requirement that two observers interact with the same system, and then moreover interact, via a communicated message, with each other. Understanding LOCC, therefore, requires understanding measurement as both a redundant or repeatable process and as a social process; with the exception of some discussions of Wigner’s friend, neither of these aspects of LOCC are considered in traditional accounts of single-observer measurement. It will be shown below, in Section 3, Section 4, Section 5 respectively, that the theoretical issues raised by these additional considerations are non-trivial.
2.2. Consequence: Measurements are Actions by POVMs
If quantum theory is universal, measurements can be represented by POVMs. A POVM is a collection {Ei} of positive-semidefinite Hilbert-space automorphisms that have been normalized so as to sum to unity; POVMs generalize traditional projective measurements (e.g., [17]) by dropping the requirement of orthogonality and hence the requirement that all elements of a measurement project onto the same Hilbert-space basis. If {Ei} is a POVM representing a measurement of the state of some quantum system S, then each component Ej is a Hilbert-space automorphism on ℋS, i.e., Ej: ℋS → ℋS; one can also write Ej|ψ⟩ for any state |ψ⟩ of S, where in general the images Ej|ψ⟩ are not mutually orthogonal. Given the assumption of universality, it is clear that any such automorphism must be implemented by the unitary physical propagator exp(−(i/ℏ)HUt) acting on the universal Hilbert space ℋU, and hence on |ψ⟩ as a collection of components of some universal state |ΨU⟩. Hence a measurement can be thought of as a physical action by a POVM, as emphasized for example by Fuchs’ [18] depiction of a POVM as an observer’s prosthetic hand.
Treating a POVM as a collection of Hilbert-space automorphisms does not, however, capture the sense in which observations extract classical information from quantum systems. To see how POVMs model measurement, it is useful to return to the case of a POVM with mutually orthogonal components, i.e., a von Neumann projection {Πi} defined on a Hilbert space ℋ. Each component Πj of a von Neumann projection {Πi} projects any state |ψ⟩ ∈ ℋ onto a basis vector |j⟩ of ℋ. If the set {|j⟩} of images of the components of {Πi} is complete in the sense of spanning ℋ, one can write |ψ⟩ = Σj Πj|ψ⟩ for states |ψ⟩ ∈ ℋ. In this case a general Hermitian observable M can be written M = Σj αjΠj, where αj is the jth possible observable outcome of M acting on a state in ℋ. Hence from an observer’s point of view, what a projection {Πi} produces is not just a new state vector, but a real outcome value αj; {Πi} is not just a Hilbert-space automorphism, but is also a mapping from ℋ to the set of real outcome values of some observable of interest.
A general POVM {Ei} can be thought of as a mapping from ℋS to a set of real outcome values, with two caveats. First, the components of a general POVM are not necessarily orthogonal and hence do not, in general, all project to the same basis. Second, any finite observer can explicitly represent, and hence can physically encode in a classical memory or communication medium, values with at most some finite number N of bits. Hence from an observer’s point of view, a component Ej of a general POVM {Ei} is not just an automorphism on ℋS; it is also a mapping Ej: ℋS → BN × {basis}S, where BN is the set of binary codes of length N and {basis}S is the set of bases of ℋS [3]. Indeed, any collection {Ei} of mappings Ei: ℋS → BN × {basis}S for which the probabilities P(αj) of obtaining real outcome values αj ∈ BN sum to unity, and for which each of the components Ei is implementable by the unitary physical propagator exp(−(i/ℏ)HUt) acting on the universal Hilbert space ℋU, must be positive semi-definite (to yield real outcome values), normalized (to yield well-defined probabilities) and be a collection of Hilbert-space automorphisms (to be implementable by the propagator); hence such a collection must be a POVM. The POVM formalism thus represents the extraction of classical information from quantum systems in the only way that it can be represented while maintaining consistency with the universality assumption.
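A numerical sketch of these defining conditions, using the symmetric "trine" POVM on a qubit (a standard textbook example chosen here for illustration; it does not appear in the paper):

import numpy as np

# Three rank-1 elements E_k = (2/3)|psi_k><psi_k| with the |psi_k> spaced
# 120 degrees apart on the Bloch circle (half-angles 0, 60, 120 degrees):
kets = [np.array([np.cos(np.pi * k / 3), np.sin(np.pi * k / 3)])
        for k in (0, 1, 2)]
E = [(2 / 3) * np.outer(v, v) for v in kets]

print(np.allclose(sum(E), np.eye(2)))            # completeness: sum E_k = I

psi = np.array([1.0, 0.0])                       # a state to be measured
probs = [float(psi @ Ek @ psi) for Ek in E]      # p_k = <psi|E_k|psi>
print([round(p, 4) for p in probs], sum(probs))  # probabilities sum to 1

Each element is positive semidefinite by construction, the elements sum to the identity, and the resulting outcome probabilities sum to unity, which is exactly the trio of conditions stated above.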
The assumption that all measurements can be represented by POVMs clearly does not entail that an observer can explicitly write down the components of every POVM that he or she might deploy in the course of interacting with the world. Doing so in any particular case would require both a complete specification of the outcome values obtainable with that POVM and a complete specification of the Hilbert space upon which it acts, or as discussed below, a complete specification of the inverse image in ℋU of its set of obtainable outcome values. Such a specification would, for any particular POVM and hence any particular Hilbert space, require scientific investigation of the physical system represented by that Hilbert space to be complete. Classical theorems [19,20] restricting the completeness of system identification strongly suggest that such completeness is infeasible in principle. Hence explicitly-specified POVMs can at best be viewed as predictively-adequate approximations based on experimental investigations carried out thus far; in practice such POVMs are available only for systems with small numbers of (known or stipulated) degrees of freedom.
2.3. Consequence: Observers must Identify the Systems They Observe
When a new graduate student enters a laboratory, he or she is introduced to the various items of apparatus that the laboratory employs. The reason for this ritual is obvious: The student cannot be expected to reliably report the state of a particular apparatus if he or she cannot identify that apparatus. Traditional discussions of quantum measurement take the ability of observers to identify items of apparatus for granted. For example, Ollivier, Poulin and Zurek define “objectivity” for physical systems operationally as follows:
“A property of a physical system is objective when it is:
• simultaneously accessible to many observers,
• who are able to find out what it is without prior knowledge about the system of interest, and
• who can arrive at a consensus about it without prior agreement.”
(p. 1 of [4]; p. 3 of [5])
Nothing is said in this definition, or elsewhere in [4,5], about how observers are able to “access” a physical system “without prior knowledge” of such state variables as its location, size or shape, and without “prior agreement” about which item in their shared environment constitutes the system of interest. To find the identification of physical systems by observers treated explicitly, one must look to cybernetics, where unique identification of even classical finite-state machines (FSMs) by finite sequences of finite observations is shown to be impossible in principle [19,20], or to the cognitive neuroscience of perception, where the identification in practice of individual systems over extended periods of time is recognized as a computationally-intensive heuristic process [21,22,23].
In practice, observers identify items of laboratory apparatus by finite sets of classically-specified criteria: location, size, shape, color, overall appearance, laboratory-affixed labels, brand name. These criteria are encodable as finite binary strings. If quantum theory is universal, items of laboratory apparatus are quantum systems, and hence are characterizable by Hilbert spaces comprising their quantum degrees of freedom. Observing a laboratory apparatus, therefore, requires deploying an operator that maps a collection of quantum degrees of freedom to a finite set of finite binary strings; by the reasoning above, such operators can only be POVMs. Identifying a system of interest clearly requires observing it; hence an observer can only identify a system of interest by deploying a POVM. Call POVMs deployed to identify systems of interest “system-identifying” POVMs. For simplicity, a system-identifying POVM can be regarded as yielding as output just the conventionalized name of the system it identifies, e.g., “S” or “the Canberra® Ge(Li) detector” [3].
The formal definition of system-identifying POVMs is complicated by two related issues. First, the vast majority of systems identified by human observers are characterized, like laboratory apparatus are characterized, not by possible outcome values of their quantum degrees of freedom, but by possible outcome values of bulk degrees of freedom such as macroscopic size or shape. The exceptions—the systems that those who reject the universality of quantum theory consider to be the only bona fide “quantum systems”—are systems defined by particular values of quantum degrees of freedom, as electrons or the Higgs boson are currently defined within the Standard Model, or are systems defined by certain observable behaviors of macroscopic apparatus, as electrons were defined in the late 19th century. The second complication is that observers, as emphasized by Zurek [24,25] and others, typically interact not with systems of interest themselves, but with their surrounding environments. While in the case of macroscopic systems such as laboratory apparatus this environment may be treated using a straightforward approximation, for example as the ambient photon field, in the case of either microscopic or very distant systems it is complicated by the inclusion of laboratory apparatus; our interactions with presumptive Higgs bosons, for example, are via an environment containing the ATLAS [26] or CMS [27] detectors. These complicating issues are not significantly simplified by considering non-human observers; the components of such observers that record classical records are, with the exception of such things as blocks of plastic that record the passage of cosmic rays, almost as distant from the microscopic events to which their records refer as are their human minders.
In recognition of the role of the intervening environment in the observation and hence identification of systems of interest, it has been proposed that system-identifying POVMs be defined, in general, over either the physically-implemented information channel with which an observer interacts (i.e., the observer’s environment) [3] or over the universe U as a whole [28]. The latter definition is adopted here, as it simplifies the description of LOCC by allowing two or more observers to be regarded as deploying the same system-identifying POVM. Defining system-identifying POVMs over all of U acknowledges, moreover, the actual epistemic position of any finite observer. Observations are information-transferring actions by the observer’s environment on the observer. Without a complete, deterministic theory of the behavior of U, such actions cannot be predicted precisely; without sufficient recording capacity to record the state of every degree of freedom of U at the instant of observation, such actions cannot be replicated precisely. Any finite observer can, therefore, at best predict or retrodict only approximately and heuristically what degrees of freedom of U might be causally responsible for any particular episode of observation. An observer can, however, be sure that such degrees of freedom are within U, so defining system-identifying POVMs over U can be viewed as an exercise of epistemic conservatism.
Defining system-identifying POVMs over U as a whole does not render observations nonlocal. Any finite observer must expend finite energy to record the outcomes obtained by deploying a POVM; hence any observation requires finite time. Any finite observer can, moreover, deploy a POVM for only a finite time. A finite observer can, therefore, regard a system-identifying POVM—or any POVM—as extracting classical information from at most a local volume with a horizon at cΔt, where Δt is the period of observation. Quantum information may originate outside this volume by entanglement, but such entanglement is undetectable in principle by the observer. Alice can only regard classical information extracted from a quantum system employed as a communication channel as a message from Bob if Bob is in her light-cone; LOCC requires timelike, not spacelike, separation of observers.
Defining system-identifying POVMs over U as a whole does not, moreover, resolve the question of how such POVMs—or how any POVMs—can yield outcome values for bulk degrees of freedom such as macroscopic size or shape. This question is, clearly, the question of quantum measurement itself; in particular, it is the question of the “emergence of classicality” that is taken up in Section 4 below.
3. Decompositional Equivalence and Its Consequences
3.1. Assumption: Our Universe Exhibits Decompositional Equivalence
A fundamental requirement of observational objectivity, and hence of science as practiced, is that reality is independent of the language chosen to describe it. This fundamental assumption that reality is independent of the descriptive terms and hence the semantics chosen by observers—in particular, human observers—underlies the assumption in scientific practice that any arbitrary collection of physical degrees of freedom can be stipulated to be a “system of interest” and named with a symbol such as “S” without this choice of language affecting either fundamental physical laws or their outcomes as expressed by the dynamical behavior of the degrees of freedom contained within S. It similarly underlies the assumption that, given the technological means, an experimental apparatus to investigate the behavior of S can be designed and constructed without altering either fundamental physical laws or the dynamical behavior of the degrees of freedom contained within S. These assumptions operate prior to apparatus-dependent experimental interventions into the behavior of S, and hence prior to observations of S, both logically and, in the course of practical investigations of microscopic degrees of freedom by means of macroscopic apparatus, temporally.
This fundamental assumption that reality is independent of semantics can be generalized to state an assumed dynamical symmetry: The universal dynamics HU is assumed to be independent of, and hence symmetric under arbitrary modifications of, boundaries drawn in ℋU by specifications of tensor product structures. Call this symmetry decompositional equivalence [3]. Stated formally, decompositional equivalence is the assumption that if a TPS S ⊗ E = S′ ⊗ E′ = U, then the dynamics HU = HS + HE + HS−E = HS′ + HE′ + HS′−E′, where S and S′ are arbitrarily chosen collections of physical degrees of freedom, E and E′ are their respective “environments” and HS−E and HS′−E′ are, respectively, the S − E and S′ − E′ interaction Hamiltonians. Such equivalence of TPSs of ℋU can be alternatively expressed in terms of the linearity of HU: If HU = ∑ij Hij, where the indices i and j range without restriction over all quantum degrees of freedom within ℋU, decompositional equivalence is the assumption that the interaction matrix elements of the Hij do not depend on the labels assigned to collections of degrees of freedom by specifications of TPSs. Decompositional equivalence is thus consistent with the general philosophical position of microphysicalism (for a recent review, see [29]), but involves no claims about explanatory reduction, and indeed no claims about explanation at all; it requires only that emergent properties of composite objects exactly supervene, as a matter of physical fact, on the fundamental interactions of the microscale components of those objects.
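A toy numerical illustration of the formal statement (an assumed construction, not from the paper): one fixed three-qubit Hamiltonian built from pairwise couplings is decomposed across two different system/environment cuts; the pieces differ between the cuts, but they sum to the same HU, so the global dynamics cannot depend on where the boundary is drawn.

import numpy as np

pairs = {(0, 1): 0.7, (0, 2): -0.3, (1, 2): 1.1}   # fixed couplings g_ij
Z, I2 = np.diag([1.0, -1.0]), np.eye(2)

def two_body(i, j, g, n=3):
    """Embed the pair term g * Z_i Z_j in the full n-qubit space."""
    ops = [I2] * n
    ops[i], ops[j] = Z, Z
    out = ops[0]
    for op in ops[1:]:
        out = np.kron(out, op)
    return g * out

H_U = sum(two_body(i, j, g) for (i, j), g in pairs.items())

def reassembled(system):
    """H_S + H_E + H_SE for the cut placing the listed qubits in S."""
    H_S = sum((two_body(i, j, g) for (i, j), g in pairs.items()
               if i in system and j in system), np.zeros((8, 8)))
    H_E = sum((two_body(i, j, g) for (i, j), g in pairs.items()
               if i not in system and j not in system), np.zeros((8, 8)))
    H_SE = H_U - H_S - H_E    # everything that crosses the boundary
    return H_S + H_E + H_SE

# Different cuts yield different pieces but the identical total:
print(np.allclose(reassembled({0}), reassembled({0, 1})))           # True
print(np.allclose(np.linalg.eigvalsh(reassembled({0})),
                  np.linalg.eigvalsh(H_U)))                         # True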
As is the assumption that quantum theory is universal, the assumption that the universe satisfies decompositional equivalence is an empirical assumption. Its empirical content is most obvious in its formulation as the assumption that the interaction matrix elements of the Hij do not depend on specifications of TPSs. This is an assumption that the pairwise interaction Hamiltonians Hij are not just independent of where and when the degrees of freedom labeled by i and j interact, but are also independent of any other classical information that might be included in the specification of a reference frame from which the interaction of i and j might be observed. As such, it is similar in spirit to Tegmark’s “External Reality Hypothesis (ERH)” that “there exists an external physical reality completely independent of us humans” ([30] p. 101). If taken literally, however, the ERH violates energy conservation, as it allows human beings to behave arbitrarily without affecting “external physical reality” and vice-versa. The assumption of decompositional equivalence, on the other hand, does not involve, entail, or allow decoupling of observers or any other systems from their environments; any evidence that energy is not conserved, or evidence that energy is conserved but not additive, would be evidence that decompositional equivalence is not satisfied in our universe. Were our universe to fail in fact to satisfy decompositional equivalence, any shift in specified system boundaries—any change in the TPS of ℋU—could be expected to alter fundamental physical laws or their dynamical outcomes; in such a universe, the notions of “fundamental physical laws” and “well-defined dynamics” would be effectively meaningless. It is, therefore, assumed in what follows that decompositional equivalence is in fact satisfied in our universe U, and hence that the dynamics HU is independent of system boundaries.
3.2. Consequence: System-Environment Decoherence can have No Physical Consequences
The assumption of decompositional equivalence has immediate, but largely unremarked, consequences in two areas: The characterization of system-environment decoherence and the characterization of system identification by observers. Let us consider decoherence first. The usual understanding of system-environment decoherence (e.g., [24,25,31,32]) is that interactions between a system S and its environment E, where S ⊗ E = U is a TPS of ℋU, select eigenstates of the S − E interaction HS−E. Such environmentally-mediated superselection or einselection [33,34] assures that observations of S that are mediated by information transfer through E will reveal eigenstates of HS−E; in the canonical example, observations of macroscopic objects mediated by information transfer through the ambient visible-spectrum photon field reveal eigenstates of position. From this perspective, it is the quantum mechanism of einselection that underlies the classical notion that the “environment” of a system—whether this refers to the ambient environment or to an experimental apparatus—objectively encodes the physical state of the system, where “objectively” has the sense given in the Ollivier–Poulin–Zurek definition [4,5] quoted in Section 2.3.
Two features of this standard account of decoherence deserve emphasis. First, the idea that the environment einselects particular eigenstates of S in an observer-independent way—that environmental einselection depends only on HS−E, where both S and E are specified completely independently of observers—allows decoherence to mimic “collapse” as a mechanism by which the world prepares or creates classical information about particular systems that observers can then detect. In this picture, as in the traditional Copenhagen picture, observers have nothing to do with what “systems” are available to observe: The world—in the decoherence picture, the environment—reveals some systems as “classical” and not others. The sense of “objectivity” defined by Ollivier, Poulin and Zurek [4,5] depends critically on this assumption; without it, the idea that observers can approach the world “without prior knowledge” of the systems it contains becomes uninterpretable. The second thing to note is that the formal mechanism of “tracing out the environment” in decoherence calculations [24,25,31,32] corresponds physically to an assumption that environmental degrees of freedom are irrelevant to the system-observer interaction, i.e., to an assumption that the physical interaction HS−O, where O is the observer, is independent of E. This assumption straightforwardly conflicts with the idea that observation—the S − O interaction—is mediated by E. This conflict between the formalism of decoherence and its model theory suggests that the trace operation is at best an approximate mathematical representation of the physics of decoherence.
By definition, einselection depends on the Hamiltonian HS−E, which is defined at the boundary, in Hilbert space, between S and E [33,34]. In a universe that satisfies decompositional equivalence, this boundary can be shifted arbitrarily without affecting the interactions between quantum degrees of freedom, i.e., without affecting the interaction Hij, and hence without affecting the matrix element ⟨i|Hij|j⟩, between any pair of degrees of freedom i and j within U. An arbitrary boundary shift, in other words, has no physical consequences. In particular, a boundary shift that transforms S ⊗ E into an alternative TPS S′ ⊗ E′ has no physical consequences for the values of matrix elements ⟨i|Hij|j⟩ where i and j are degrees of freedom within the intersection E ∩ E′, and hence has no physical consequences for states of E ∩ E′ or for the classical information that such states encode. The encodings within E ∩ E′ of arbitrary states of S and S′, and hence of einselected pointer states of S and S′ are, therefore, entirely independent of the boundaries of these systems, and hence entirely independent of the Hamiltonians HS−E and HS′−E′ defined at those boundaries. The encoding of information about S in E is, in other words, entirely a result of the action of HU = ∑ij Hij, and is entirely independent of specified system boundaries or “emergent” system-environment interactions definable at such specified boundaries.
It has been proposed, under the rubric of “quantum Darwinism” [6,7], that environmental “witnessing” of the pointer states of particular macroscopic systems by einselection explains the observer-independent “emergence into classicality” of such systems, and hence explains the observer-independent existence of the “classical world” of ordinary human experience (see also [10,31,32]). In a universe satisfying decompositional equivalence, the einselection of pointer states as eigenstates of system-environment interactions cannot, as shown above, be a physical mechanism, and hence cannot underpin an observer-independent “objective” [4,5] encoding of classical information about some particular systems at the expense of classical information about the states of other possible systems in such a universe. In a universe satisfying decompositional equivalence, the shared environment encodes the states of all possible embedded systems, or none at all. The notion that environmental witnessing and quantum Darwinism explain the “emergence of classicality” collapses in a universe satisfying decompositional equivalence, as both require that einselection physically and observer-independently encode the states of some but not all “systems” in the state of E [28].
The physics of continuous fluid flow provides a simple example of decompositional equivalence and its consequences for einselection. It is commonplace to describe fluid flow in terms of deformable voxels, stipulated to be cubic at some initial time t0, that contain some particular collection of molecules. The stipulation of such a voxel has no effect on the intermolecular interactions between the molecules composing the fluid, whether these molecules are within, outside, or on opposite sides of the boundary of the voxel. Stipulation of a voxel boundary immediately defines, however, a Hamiltonian Hin−out that describes the bulk interaction between the molecules within the voxel and those outside. This bulk interaction can be viewed as decohering the collective quantum state |ψvox⟩ of the molecules within the voxel, with a decoherence time at room temperature and pressure of substantially less than 10−20 s [35], and as einselecting |ψvox⟩ as an eigenstate of position within the fluid at all subsequent times. Such einselection prevents the wavefunction ψvox from spreading into a macroscopically-extended spatial superposition, just as decoherence and einselection by interplanetary dust, gases and radiation prevent the wavefunction of Hyperion from doing so [36]. Does the state of the fluid outside the stipulated voxel objectively encode the position of the continuously-deforming voxel boundary at which this einselection takes place? Could observers with no prior knowledge of the stipulated voxel boundary determine its position by observing the state of the fluid? Obviously they could not.
The situation with bulk material objects appears, intuitively, to be different from the fluid-flow situation just described. When viewed in terms of pairwise interactions between the quantum degrees of freedom of individual atoms, however, the intuitive difference vanishes. Consider a uniform sphere of Pb embedded in a solid mass of Plexiglas® plastic. The interatomic interactions between Pb, C, O and H atoms are completely independent of whether the Pb sphere, the Pb sphere together with a surrounding spherical shell of plastic, a voxel of Pb entirely within the Pb sphere, or a voxel containing only plastic is considered the “system of interest.” The boundary of the system stipulated, in each of these cases, is the site of action of a Hamiltonian Hin−out that describes the bulk interaction between the atoms within the stipulated boundary and those outside; the action of this Hamiltonian einselects positional eigenstates of the collective quantum state of the atoms inside the boundary just as it does in the case of a voxel boundary in a fluid. Observers of the states of some arbitrary sample of the atoms in the plastic part of this combined system would, however, be no more capable of determining the site of a stipulated boundary than observers of some arbitrary sample of the fluid molecules in the previous example.
As a final example, consider observers of the experimental apparatus employed by Brune et al. [37] to follow the decoherence of single Rb atoms within an ion trap. Would an observer unfamiliar with the design or purpose of this apparatus, for example a new graduate student, who observed the behavior of its externally-accessible degrees of freedom—either quantum degrees of freedom or bulk macroscopic degrees of freedom such as pointer positions or readouts from digital displays—be capable of inferring the boundary between the trapped Rb atoms and the apparatus itself, including the magnetic and various electromagnetic fields it generates? Clearly not. The boundary between the quantum system comprising the trapped Rb atoms and the quantum system comprising the internal radiative degrees of freedom is stipulated by theory, and this theory must be understood to interpret the behavior of the apparatus as a measurement of decoherence time. Observers of such an apparatus, in other words, must have prior knowledge of the system they are observing and must have prior agreements about what the bulk macroscopic states of the system indicate—about what the characters displayed on the readouts mean, for example—to comprehend the operation of the apparatus. The criteria for “objectivity” offered by Ollivier, Poulin and Zurek [4,5] and quoted in Section 2.3 above fail utterly in this case, just as they do for the “objectivity” of voxel boundaries in fluids or the intuitively “obvious” boundary of a Pb sphere embedded in plastic. As in the previous examples, what counts as the boundary of the “system of interest” contained within an ion trap is established by an agreed convention among the observers, one that can be changed arbitrarily without changing the physical dynamics occurring within the ion trap in any way.
If decoherence has no physical consequences for interaction matrix elements, it can have no physical consequences for entanglement. The total entanglement in a quantum universe satisfying decompositional equivalence is, therefore, strictly conserved. Measurements, in particular, cannot physically destroy entanglement, and hence cannot create von Neumann entropy. The state |U⟩ can, in this case, be considered to be a pure quantum state with von Neumann entropy of zero at all times. This situation is in stark contrast to that of a universe in which decompositional equivalence is violated, i.e., a universe in which the dynamics do depend on system boundaries, either via a physical process of “wave-function collapse” driven by measurement or a physical and therefore ontological “emergence” of bounded systems driven by decoherence. In this latter kind of universe, entanglement is physically destroyed by decoherence and von Neumann entropy objectively increases. A countervailing physical process that creates entanglement, either between measurements or in regions of weak decoherence, and hence decreases von Neumann entropy must be postulated to prevent such a universe from solidifying into an objectively classical system, a kind of system that our universe demonstrably is not.
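The conservation claim is easy to see in a two-qubit toy calculation (a sketch; the Bell state is just an illustration): under purely unitary dynamics the global state remains pure, with zero von Neumann entropy, while the nonzero entropy of a reduced state reflects only an observer's restriction to a subsystem:

import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr[rho log2 rho], computed from eigenvalues."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-(w * np.log2(w)).sum())

# Bell state (|00> + |11>)/sqrt(2): globally pure.
psi = np.zeros(4)
psi[0] = psi[3] = 1/np.sqrt(2)
rho = np.outer(psi, psi.conj())

# Reduced state of the first qubit: partial trace over the second.
rho_A = rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)

print(von_neumann_entropy(rho))    # ~0.0: the global state is pure
print(von_neumann_entropy(rho_A))  # ~1.0: one bit of entanglement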
3.3. Consequence: Identification of Systems by Observers is Intrinsically Ambiguous
While they cannot, without violating decompositional equivalence, physically destroy entanglement, observations nonetheless have real-valued outcomes that can be recorded in classical data structures and reported by one observer to another using classical communication. If the “systems” that these outcome values describe cannot be assumed to be specified for observers by decoherence and environmental witnessing, they must be specified by observers themselves, by the deployment of system-identifying POVMs. It was argued in Section 2.3 above that both the role of the environment in mediating observations and the de facto epistemic position of finite observers support defining system-identifying POVMs not over the particular sets of quantum degrees of freedom—the particular Hilbert spaces and thus TPSs of U—corresponding to recordable outcome values, but over U as a whole. With the assumption of decompositional equivalence, this broad approach to defining POVMs becomes not just advisable but inescapable. If system boundaries can be shifted arbitrarily without physical consequences, they can be shifted arbitrarily without consequences for the recording of observed outcome values in physical media. Hence the outcome values recorded following deployment of a POVM must be independent of arbitrary shifts of the boundary within ℋU, and hence in the TPS of U, over which the POVM is defined. This can only be the case if the POVM is not defined over one component of a fixed TPS, but rather over all of ℋU.
Recall that any finite observer is restricted to a finite encoding of the outcomes obtained with any POVM; any POVM can be considered a mapping to binary codes of some finite length N. This condition can be met by composing an arbitrary POVM {Ei} with a nonlinear, finite-resolution function f such that:
(f ∘ {Ei}): ℋU → {αk}   (1)
where each outcome value αk is a binary code of length at most N. Defining any POVM {Ei} over all of ℋU as in (1) renders the definition of “system” implicit: A system S is whatever returns finite outcome values αk when acted upon by some POVM {Ei} composed with f. The detectable degrees of freedom of such a system are, at some time t, the degrees of freedom in the inverse images Im−1Ek of the components Ek for which αk ≠ 0 at t.
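A minimal sketch of this composition (the numbers and the particular POVM are illustrative, not from the paper): a two-outcome POVM is applied to a qubit state, and a finite-resolution function f truncates each Born-rule outcome value to an N-bit code, which is all a finite observer retains:

import numpy as np

N_BITS = 4   # illustrative resolution limit of the observer

def f(p, n_bits=N_BITS):
    """Finite-resolution readout: truncate a value in [0,1] to n bits."""
    return format(int(p * (2**n_bits - 1)), '0%db' % n_bits)

# A two-outcome POVM {E0, E1} on one qubit (E0, E1 >= 0 and E0 + E1 = I).
E0 = np.diag([0.9, 0.2])
E1 = np.eye(2) - E0

psi = np.array([1.0, 1.0]) / np.sqrt(2)
for E in (E0, E1):
    p = float(np.real(psi.conj() @ E @ psi))  # Born-rule outcome value
    print(p, '->', f(p))                      # e.g. 0.55 -> '1000'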
In general, many TPSs of ℋU will satisfy (1) for any given {Ei}; the collections of quantum degrees of freedom represented by the “system” components of these TPSs will be indistinguishable in principle by an observer deploying {Ei}. Observations in any universe satisfying decompositional equivalence thus satisfy a symmetry, called “observable-dependent exchange symmetry” in [38]: Any two systems S and T for which a POVM {Ei} returns identical sets of outcome values when composed with f can be exchanged arbitrarily without affecting observations carried out using {Ei}. To borrow an example from [38], many distinct radioactive sources may appear identical to an observer equipped only with a Geiger counter. It is shown in [38] that all observational consequences of the no-cloning theorem, the Kochen–Specker theorem and Bell’s theorem follow from observable-dependent exchange symmetry. Decompositional equivalence is sufficient, therefore, for the universe to appear quantum-mechanical, not classical, to finite observers whose means of collecting classical information can be represented by POVMs.
By imposing observable-dependent exchange symmetry on observers, the assumption of decompositional equivalence removes the final sense in which observational classicality might be regarded as objective classicality: Two observers who record the same outcomes can no longer infer that their respective POVMs have detected the same collection of quantum degrees of freedom. As observable-dependent exchange symmetry applies, in principle, to all quantum systems, it applies not just to the “systems of interest” to which classically communicated outcome values refer, but to the physical media into which such outcome values are encoded. The “measurement problem” in the current framework is thus the problem of explaining not only how discrete outcome values are obtained from quantum systems, but how classical data structures encoding such values are implemented by the collections of quantum degrees of freedom that constitute communication channels, including the collections of quantum degrees of freedom that constitute the apparently-classical memories of observers. The measurement problem in this formulation is thus the full problem of understanding LOCC. This formulation of the measurement problem is similar to those encountered in the multiple worlds [9], multiple minds [39] or consistent histories [8] formulations of quantum theory, all of which assume purely unitary evolution; however, it rejects the implicit ontological assumption, common to these standard approaches, that “systems” and hence TPSs can be regarded as constants across “branches” or histories, and therefore rejects the assumption that “classical communication” can be taken for granted as being physically unproblematic.
4. Decoherence as Semantics
4.1. Decoherence as Implemented by a POVM
If decoherence is not a physical process by which the environment creates classical information for observers, what is it? It is suggested in [3], and shown in detail in [28], that decoherence can be self-consistently and without circularity viewed as a purely informational process, a model-theoretic or semantic mapping from quantum states to classical information. It is, therefore, reasonable to think of decoherence as implemented by a POVM. To see this, it is useful to reconceptualize observation not as the collection by observers of pre-existing classical information, but as a dynamical outcome of the continuous action by the environment on the physical degrees of freedom composing the observer. If an arbitrary system S interacts with its environment E via a Hamiltonian HS−E, a POVM {Ek} can be defined as a mapping:
Ek = ∑i ⟨k|Hik|i⟩ / ∑j ∑i ⟨j|Hij|i⟩   (2)
where i labels degrees of freedom of S and k and j label degrees of freedom of E. This POVM maps each degree of freedom of E to the real normalized sum of its matrix elements, and hence to its total coupling, with the degrees of freedom of S, and hence naturally represents the encoding of |S⟩ in |E⟩. It thus takes the slogan “decoherence is continuous measurement by the environment” literally.
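Read this way, the mapping is easy to prototype (a sketch under the reconstruction of (2) given above, with random nonnegative numbers standing in for the magnitudes of the Hik matrix elements): each environmental degree of freedom k is assigned the normalized total of its couplings to S.

import numpy as np

rng = np.random.default_rng(1)
n_S, n_E = 3, 5                 # degrees of freedom in S and in E

# Stand-ins for the magnitudes of the matrix elements of H_ik.
H = rng.random((n_S, n_E))

# Component E_k: normalized total coupling of environmental d.o.f. k to S.
E = H.sum(axis=0) / H.sum()
print(E, E.sum())               # nonnegative components summing to 1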
In a universe that satisfies decompositional equivalence, the meanings of “S” and “E” in (2) can be shifted arbitrarily provided S ⊗ E = U. Suppose an observer O deploys a POVM {Ei} defined over U, such that the inverse image Im−1Ek is outside O for all components Ek for which αk ≠ 0. In this case, O can be considered the “system” and ∪k(Im−1Ek) ⊂ U where αk ≠ 0 can be considered the “environment” in (2); the Hamiltonian Hik then characterizes the observer-environment interaction, and encodes classical information—the outcome values αk—about ∪k(Im−1Ek) into |O⟩. Hence (2) provides a general definition of decoherence as the deployment of a POVM by an observer. For observers embedded in a relatively static environment, for which the total observer-environment interaction ∑ik Hik is nearly constant, (2) is reasonably interpreted as defining a single, continuously-deployed POVM. For observers embedded in highly-variable environments that nonetheless exhibit some periodicity, as most human observers are, it is reasonable to view (2) as describing the deployment of not one but a periodic sequence of POVMs, each normalized over a subset of the environmental degrees of freedom with which O interacts. As such a sequence must be finite for a finite observer, a finite observer can only be viewed as decohering his, her or its environment in a finite number of ways. Hence unlike the “environment as witness”, a finite observer as witness can physically encode the states of at most a finite number of distinct “systems”. Because the POVMs encoded by finite observers are limited in their resolution by f, each of the distinct “systems” representable by a finite observer is in fact an equivalence class under observable-dependent exchange symmetry.
Using (2), any collection of Hilbert-subspace boundaries that enclose disjoint collections of degrees of freedom and hence define distinct “systems” Sµ can be represented by a collection of distinct POVMs {Eµ}. The detectable outcome values produced by these POVMs have non-overlapping inverse images; hence they all mutually commute. If these POVMs are regarded as all acting at each of a sequence of times ti, their outcomes at those times can be considered to be a sequence of real vectors α(ti) = (α1(ti), …, αµ(ti), …). These vectors form a consistent decoherent history of the Sµ at the ti, in the sense defined by Griffiths [8]. In a universe in which decoherence is an informational process, the number of such consistent decoherent histories and hence the number of “classical realms” [40] is limited only by the number of distinct sets of subspaces of ℋU, i.e., is combinatorial in the number of degrees of freedom of ℋU. Each of these histories, as a discrete time sequence of real vectors, can be regarded as a sequential sample of the state transitions of a classical finite state machine (FSM; [19]). As shown by Moore [20], no finite sequence of observations of an FSM is sufficient to uniquely identify the FSM; hence no finite sample of any decoherent history is sufficient to identify the TPS boundaries at which the POVMs contributing to the history are defined, confirming the observable-dependent exchange symmetry of observations in a universe satisfying decompositional equivalence.
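Moore's result is easy to exhibit in code (a toy sketch, names illustrative): the two finite state machines below have different internal structure but emit identical outputs on every input sequence, so no finite observational history distinguishes them:

from itertools import product

# Two Moore machines over input alphabet {0, 1}, both emitting only 'a':
# M1 has one state; M2 shuttles between two states with the same output.
M1, out1 = {'s': {0: 's', 1: 's'}}, {'s': 'a'}
M2, out2 = {'p': {0: 'q', 1: 'q'}, 'q': {0: 'p', 1: 'p'}}, {'p': 'a', 'q': 'a'}

def run(delta, out, state, inputs):
    trace = [out[state]]
    for x in inputs:
        state = delta[state][x]
        trace.append(out[state])
    return ''.join(trace)

# Every finite experiment returns identical traces for both machines.
for length in range(1, 6):
    for inputs in product((0, 1), repeat=length):
        assert run(M1, out1, 's', inputs) == run(M2, out2, 'p', inputs)
print("indistinguishable on all input sequences up to length 5")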
4.2. Decoherence Defines a Virtual Machine
A classical virtual machine is an abstract machine representable by an algorithm executed on a classical Turing machine [41,42]; any executable item of software, from an operating system to a word processor or a numerical simulation, defines a virtual machine. An execution trace of a virtual machine V is the sequence of state transitions that V executes from some given input state. Any classical FSM is a classical virtual machine; hence any finite sequence of observations made with a POVM can be represented as an execution trace of a classical virtual machine. Considering that an arbitrary algorithm A can be employed to choose which of a collection of mutually-commuting POVMs to deploy at a given time point tk, it is clear that any consistent decoherent history of U can be represented as an execution trace of a classical virtual machine. Hence decoherence can, in general, be represented as a mapping of ℋU to the space of classical virtual machines, i.e., by a diagram such as Figure 1; as such a mapping takes quantum states to classical information, it can be represented as a POVM {Ei}. The requirement that this diagram commutes is the requirement that the action of the physical propagator e−iHU(tn+1−tn)/ħ acting from tn to tn+1 is represented, by the mapping {Ei}, as a classical state transition from the nth to the (n + 1)th state of some virtual machine V. This commutativity requirement is fully equivalent to the commutativity requirement that defines consistency of observational histories of U (e.g., [8] Equation 10.20). Hence an evolution HU is consistent under a decoherence mapping {Ei} if it can be interpreted as an implementation of a classical virtual machine.
Figure 1. Semantic relationship between physical states of U and einselected virtual states Vn of a virtual machine V implemented by U. Commutativity of this diagram assures that the decoherence mapping {Ei} is consistent.
The semantic relationship shown in Figure 1 is familiar: It is the relationship by which the behavior of any physical device is interpreted as computation, i.e., as execution of an algorithm characterized as an abstract virtual machine V. Any consistent decoherence mapping can, therefore, be regarded as an interpretation of the time evolution of U as classical computation. As the outcome values returned by any mapping {Ei} deployed by a finite observer must be collected within a finite time, any such mapping interprets only some local sample of the time evolution of U as computation. This perspective on decoherence is consistent with the cybernetic intuition—the intuition expressed by the Church–Turing thesis—that any classical dynamical process, and in particular any classical communicative process can be represented algorithmically.
5. Observation as Entanglement
5.1. Classical Communication is Regressive
We can now return to Alice and Bob, who each perform local observations of a quantum system and then exchange their results by classical communication. If the dynamics in U exhibit decompositional equivalence, Alice and Bob cannot rely on decoherence by their shared environment to uniquely identify the system of interest; instead they must each rely on their own POVM to identify it. Observable-dependent exchange symmetry prevents them, moreover, from determining by observation that they have identified the same system of interest; given (2), they cannot determine without observational access to all degrees of freedom of U whether they are deploying the same system-identifying POVM. Under these conditions, what is the meaning of LOCC?
The first thing to note is that any answer to this question that relies on prior agreements between Alice and Bob is straightforwardly regressive, and hence incapable of explaining anything. How, for example, do Alice and Bob know which POVM to deploy in order to perform a joint observation? How, in other words, do observers coordinate their observations, independently of whether they manage to observe a single, shared system? There are two possibilities, as illustrated in Figure 2. One involves classical communication: In line with the canonical scenario, some third party presents each observer with a qubit, and instructs them on how to observe it. The other, more in line with laboratory practice, involves Alice and Bob jointly observing the production of the pair, and then each transporting one of the qubits to a separate site for further observation. This second option reduces the problem of selecting the correct POVM to employ for the subsequent observations to the problem of resolving the joint system-identification ambiguity when the production of S is jointly observed.
From the perspective of the observers, the two processes illustrated in Figure 2 both involve the receipt of classical information at t1 and its use in directing observations at t2; they differ only in the source of the information received at t1. As noted earlier, however, the only means of obtaining classical information provided by quantum theory is the deployment of a POVM. The two processes differ, therefore, only in which POVM the observers deploy at t1: In (A) they each deploy a POVM that identifies and determines the state of the “classical source,” while in (B) they each deploy a POVM that identifies and determines the state of S. Hence the coordination question asked at t2 can also be asked at t1; even if the intrinsic ambiguity of observations with POVMs is ignored, the LOCC scenario cannot get off the ground without an agreement between the observers about which POVM to deploy at t1.
In order to reach an agreement about which POVMs to deploy at t1, the observers must exchange classical information. Each observer must, therefore, deploy a POVM that enables the acquisition of classical information from the other; call Alice’s POVM for acquiring information from Bob “EA” and Bob’s POVM for acquiring information from Alice “EB”, and suppose that these POVMs are deployed at some time t0. Clearly the same question can be asked at t0 as at t2 and t1, and clearly it cannot be answered by postulating yet another agreement, another classical communication, and another deployment of POVMs. The same kind of regress infects any simple joint assumption by Alice and Bob that they are observing the same system, an assumption that must be communicated to be effective. Any instance of measurement under LOCC conditions, in other words, requires the postulation of a priori classical communication between the observers, and hence requires that the observers themselves be regarded as classically objective a priori. Minimal quantum mechanics with decompositional equivalence provides no mechanism by which such a priori classical objectivity can be achieved; hence minimal quantum mechanics with decompositional equivalence does not support LOCC. At best, minimal quantum mechanics with decompositional equivalence supports the appearance of LOCC in cases in which observers agree to treat their observations as observations of the same system.
Figure 2. Two options for coordinating the selection of POVMs EA and EB by Alice and Bob, respectively. (A) Alice and Bob receive POVM selection instructions from a classical source. (B) Alice and Bob jointly observe the production of S and agree that their selected POVMs identify it.
The regress of classical communications encountered here is equivalent to the regress of the von Neumann chain that motivates the adoption of “collapse” as a postulate of quantum mechanics [17]. Following Everett [9], the usual response to this regress in the context of minimal quantum mechanics is to postulate observation-induced “branching” between the multiple possible outcomes at each instant of observation, with the resulting “branches” being regarded as equally “actual” either as physically-realized classical universes (e.g., [43,44]) or as classical information-encoding states of a branching observer’s consciousness (e.g., [39]). In either case, inter-branch decoherence is regarded as conferring observational classicality, and the identity of observed systems across branches is taken for granted; hence decompositional equivalence and observable-dependent exchange symmetry are both violated by the standard Everettian picture. The concept of branching does not, moreover, explain how classical outcomes are encoded by the physical degrees of freedom that implement observers; it therefore leaves open the question of how the communication of classical information is possible.
5.2. Memory is Communication
The second thing to note regarding LOCC is that the physical implementation of any classical memory, whether it comprises words written on a page or neural excitation patterns in someone’s brain, is a quantum system. Physically accessing a classical memory requires extracting classical information from this quantum system, and hence requires deploying a POVM. Observable-dependent exchange symmetry assures that an observer cannot be confident that the physical degrees of freedom accessed with a “memory-accessing” POVM are the same physical degrees of freedom that were accessed when a memory was encoded, or on any previous occasion when the memory was read. Hence Bob’s predicament when accessing his own memory of an observation is no different from Alice’s predicament when accessing a report from Bob; in both cases, all the usual caveats pertaining to quantum measurement apply.
The requirement that classical memories be observed in order to function as memories renders the LOCC scenario descriptive of all reportable or even recallable observations by single observers. When John Wheeler said “no phenomenon is a physical phenomenon until it is an observed phenomenon” (quoted in [45] p. 191), he might as well have said that no phenomenon is a physical phenomenon until it is an observed and reported phenomenon, at least reported to the observer himself/herself/itself via recall from memory. It is reporting that renders observational results classical. In this sense, observational classicality is intrinsically public, or social; without an observer to access a report of an observation, there is no evidence that the observation has been classically recorded. Hence explaining the appearance of LOCC can be considered to be equivalent to explaining the ability of a single observer to interpret a physical state, including a physical state of his/her/its own memory system, as a classical report of a previous observation.
5.3. Implementation of POVMs by HU
Let us suppose that Alice obtains a report from Bob simply by observing his state |B⟩. If Alice is to regard a state |B⟩ of Bob as a report, i.e., as classically encoding a state |S⟩ of some identified external system S, it must be possible, at least in principle, for her to establish that a counterfactual-supporting classical correlation—a classical correlation that exists whether observed or not—between |S⟩ and |B⟩ is maintained by the B − S interaction and hence, given decompositional equivalence, by HU. The action of HU maintains a counterfactual-supporting classical correlation between states of S and B just in case S and B are entangled; if the correlation that is maintained is perfect, S and B must be monogamously entangled. Whether joint states of two identified systems appear to be entangled is, however, dependent on the choice of basis and hence the POVM deployed to determine their joint states [46,47,48,49,50]. Bob’s state |B⟩ is, therefore, a classical encoding of |S⟩ for Alice only if she deploys a POVM that projects |U⟩ onto a Hilbert-space basis in which the joint state |SB⟩ is entangled, and is a perfectly classical encoding if this apparent entanglement is monogamous.
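The basis (TPS) dependence invoked here [46,47,48,49,50] can be exhibited directly (a sketch; the particular global unitary is illustrative): one and the same vector in a four-dimensional Hilbert space is maximally entangled relative to one factorization and a product state relative to another:

import numpy as np

def entropy_of_first_factor(psi):
    """Entanglement entropy of a pure state under a 2 x 2 factorization."""
    rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-(w * np.log2(w)).sum())

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)

# A global unitary defining an alternative factorization: W = CNOT.(H x I),
# chosen so that W|00> equals the Bell state above.
H2 = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1,0,0,0],[0,1,0,0],[0,0,0,1],[0,0,1,0]])
W = CNOT @ np.kron(H2, np.eye(2))

print(entropy_of_first_factor(bell))               # 1.0: entangled in TPS 1
print(entropy_of_first_factor(W.conj().T @ bell))  # 0.0: product in TPS 2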
To say of any observer O that “O deploys {Ei} to identify S” is, therefore, just to say that O and S are entangled by the action of HU on the quantum degrees of freedom that implement O and S: Observation is entanglement. The existence of such entanglement is an objective fact that is, in a universe satisfying decompositional equivalence, independent of the boundaries of S and O. Whether S and O appear to be entangled to a third-party observer, however, is not an objective fact; it rather depends on the POVM employed by that observer to extract classical information from the degrees of freedom implementing S and O. Hence while the classical correlation between S and O is “real”—i.e., physical, a result of the action of HU—whether it appears classical to third parties is virtual, i.e., dependent on semantic interpretation. All public communication is, therefore, nonfungible or “unspeakable” in the sense defined in [51]: The information communicated is always strictly relative to a POVM—a “reference frame” in the language of [51]—that is not specified by HU and cannot be assumed without circularity. Any publicly-communicable classical description of the world is, therefore, intrinsically logically circular.
The intrinsic circularity of public classical communication renders an explanation of a shared classical world in terms of fundamental physics unattainable. The shared classical world of ordinary experience cannot, therefore, be regarded as “emergent” from fundamental physics alone; instead it must be thought of as stipulated by the choice of a POVM, i.e., as stipulated by observers themselves. From a practical point of view, however, a shared POVM is a shared item of experimental apparatus. The conclusion that classical communication is entanglement therefore raises the possibility of discovering an item of apparatus that implements a POVM capable of revealing, to third-party observers, the entanglement that transfers classical information from S to O in any particular instance. With such an apparatus, it would be possible to claim a third-party understanding of the local action of HU that implements any particular instance of classical communication.
6. Conclusions
As Bohr [52] often emphasized, physicists must rely on language, pictures, and other conventionalized tools of human communication to construct descriptions of the world. They must, moreover, rely on measurements conducted in finite regions of space and time. The acquisition and communication of classical information is, therefore, always pursued in a LOCC setting. What has been examined here is the question of how such communication can be understood in terms of basic physics: Minimal quantum mechanics together with decompositional equivalence. What has been shown is that classical communication is quantum entanglement that results deterministically from the action of HU. Such entanglement is not publicly accessible to multiple observers without the further specification of a POVM. Any such specification is, however, itself an item of classical information; hence any claim that classical communication “emerges” from quantum entanglement involves logical circularity. The idea that quantum theory can produce a shared classicality—can be an “ultimate theory that needs no modifications to account for the emergence of the classical” ([53] p. 1)—therefore cannot be maintained. This loss of “emergent classicality” is, however, balanced by a powerful gain: The possibility that a POVM can be discovered that will reveal, in particular cases, the entanglement by which the transfer of classical information from system to observer is implemented.
The dependence of physics on model-theoretic or semantic assumptions explored here ties physics explicitly to classical computer science: The selection of a shared POVM that enables quantum theory to get off the ground as a description of a shared observable world is fully equivalent to the selection of a virtual-machine description under which a physical process counts as the instantiation of a classical algorithm. All physical descriptions are, from this point of view, specifications of classical virtual machines. What distinguishes “quantum” from “classical” computation is the choice of a POVM. The increased efficiency of quantum computation is, therefore, not the result of a different kind of device executing a different kind of behavior, but rather the result of a different choice of description. Castagnoli [54,55] has shown that executions of quantum algorithms can be understood as executions of classical algorithms in which half of the required answer is known up front; what the current analysis suggests is that this half of the required answer is encoded by the POVM with which the initial state of a quantum computation is defined.
1. Nielsen, M.A.; Chuang, I.L. Quantum Computation and Quantum Information; Cambridge University Press: Cambridge, UK, 2000. [Google Scholar]
2. Landauer, R. Information is a physical entity. Physica A 1999, 263, 63–67. [Google Scholar] [CrossRef]
3. Fields, C. If physics is an information science, what is an observer? Information 2012, 3, 92–123. [Google Scholar]
4. Ollivier, H.; Poulin, D.; Zurek, W.H. Objective properties from subjective quantum states: Environment as a witness. Phys. Rev. Lett. 2004, 93, 220401:1–220401:4. [Google Scholar]
5. Ollivier, H.; Poulin, D.; Zurek, W.H. Environment as a witness: Selective proliferation of information and emergence of objectivity in a quantum universe. Phys. Rev. A 2005, 72, 042113:1–042113:21. [Google Scholar]
6. Blume-Kohout, R.; Zurek, W.H. Quantum Darwinism: Entanglement, branches, and the emergent classicality of redundantly stored quantum information. Phys. Rev. A 2006, 73, 062310:1–062310:21. [Google Scholar]
7. Zurek, W.H. Quantum Darwinism. Nat. Phys. 2009, 5, 181–188. [Google Scholar] [CrossRef]
8. Griffiths, R.B. Consistent Quantum Theory; Cambridge University Press: New York, NY, USA, 2002. [Google Scholar]
9. Everett, H., III. “Relative state” formulation of quantum mechanics. Rev. Mod. Phys. 1957, 29, 454–462. [Google Scholar] [CrossRef]
10. Landsman, N.P. Between classical and quantum. In Handbook of the Philosophy of Science: Philosophy of Physics; Butterfield, J., Earman, J., Eds.; Elsevier: Amsterdam, The Netherlands, 2007; pp. 417–553. [Google Scholar]
11. Schlosshauer, M. Experimental motivation and empirical consistency of minimal no-collapse quantum mechanics. Ann. Phys. 2006, 321, 112–149. [Google Scholar] [CrossRef]
13. Farhi, E.; Gutmann, S. An analog analogue of a digital quantum computation. Phys. Rev. A 1998, 57, 2403–2406. [Google Scholar] [CrossRef]
14. Briegel, H.J.; Browne, D.E.; Dür, W.; Raussendorf, R.; van den Nest, M. Measurement-based quantum computation. Nat. Phys. 2009, 5, 19–26. [Google Scholar]
15. Aaronson, S. NP-complete problems and physical reality. Available online: (accessed on 6 December 2012).
16. Wallace, D. Philosophy of quantum mechanics. In The Ashgate Companion to Contemporary Philosophy of Physics; Rickles, D., Ed.; Ashgate Publisher: Aldershot, UK, 2008; pp. 16–98. [Google Scholar]
17. von Neumann, J. Mathematische Grundlagen der Quantenmechanik; Springer: Berlin, Germany, 1932. [Google Scholar]
18. Fuchs, C.A. QBism: The perimeter of quantum Bayesianism. Available online: (accessed on 6 December 2012).
19. Ashby, W.R. An Introduction to Cybernetics; Chapman and Hall: London, UK, 1956. [Google Scholar]
20. Moore, E.F. Gedankenexperiments on sequential machines. In Automata Studies; Shannon, C.E., McCarthy, J., Eds.; Princeton University Press: Princeton, NJ, USA, 1956; pp. 129–155. [Google Scholar]
21. Rips, L.; Blok, S.; Newman, G. Tracing the identity of objects. Psychol. Rev. 2006, 113, 1–30. [Google Scholar]
22. Scholl, B.J. Object persistence in philosophy and psychology. Mind Lang. 2007, 22, 563–591. [Google Scholar] [CrossRef]
23. Fields, C. The very same thing: Extending the object token concept to incorporate causal constraints on individual identity. Adv. Cogn. Psychol. 2012, 8, 234–247. [Google Scholar]
24. Zurek, W.H. Decoherence, einselection and the existential interpretation (the rough guide). Philos. Trans. R. Soc. A 1998, 356, 1793–1821. [Google Scholar] [CrossRef]
25. Zurek, W.H. Decoherence, einselection, and the quantum origins of the classical. Rev. Mod. Phys. 2003, 75, 715–775. [Google Scholar] [CrossRef]
26. Aad, G.; Abajyan, T.; Abbott, B.; Abdallah, J.; Abdel-Khalek, S.; Abdelalim, A.A.; Abdinov, O.; Abenm, R.; Abi, B.; Abolins, M.; et al. Observation of a new particle in the search for the Standard Model Higgs boson with the ATLAS detector at the LHC. Phys. Lett. B 2012, 716, 1–29. [Google Scholar] [CrossRef]
27. CMS Collaboration. Combined results of searches for the standard model Higgs boson in pp collisions at √s = 7 TeV. Phys. Lett. B 2012, 710, 26–48.
28. Fields, C. A model-theoretic interpretation of environmentally-induced superselection. Int. J. Gen. Syst. 2012, 41, 847–859. [Google Scholar] [CrossRef]
29. Hu, B.L. Emergence: Key physical issues for deeper philosophical inquiries. J. Phys. Conf. Ser. 2012, 361. [Google Scholar] [CrossRef]
30. Tegmark, M. The mathematical universe. Found. Phys. 2008, 38, 101–150. [Google Scholar] [CrossRef]
31. Schlosshauer, M. Decoherence, the measurement problem, and interpretations of quantum theory. Rev. Mod. Phys. 2004, 76, 1267–1305. [Google Scholar] [CrossRef]
32. Schlosshauer, M. Decoherence and the Quantum-to-Classical Transition; Springer: Berlin, Germany, 2007. [Google Scholar]
33. Zurek, W.H. Pointer basis of the quantum apparatus: Into what mixture does the wave packet collapse? Phys. Rev. D 1981, 24, 1516–1525. [Google Scholar] [CrossRef]
34. Zurek, W.H. Environment-induced superselection rules. Phys. Rev. D 1982, 26, 1862–1880. [Google Scholar] [CrossRef]
35. Joos, E.; Zeh, H.D. The emergence of classical properties through interaction with the environment. Z. Phys. B 1985, 59, 223–243. [Google Scholar] [CrossRef]
36. Zurek, W.H. Decoherence, chaos, quantum-classical correspondence, and the algorithmic arrow of time. Phys. Scr. 1998, 76, 186–198. [Google Scholar] [CrossRef]
37. Brune, M.; Hagley, E.; Dreyer, J.; Maitre, X.; Maali, A.; Wunderlich, C.; Raimond, J.M.; Haroche, S. Observing the progressive decoherence of the meter in a quantum measurement. Phys. Rev. Lett. 1996, 77, 4887–4890. [Google Scholar]
38. Fields, C. Bell’s theorem from Moore’s theorem. Int. J. Gen. Syst. Available online: (accessed on 10 December 2012). in press.
39. Zeh, H.D. The problem of conscious observation in quantum mechanical description. Found. Phys. Lett. 2000, 13, 221–233. [Google Scholar] [CrossRef]
40. Hartle, J.B. The quasiclassical realms of this quantum universe. Found. Phys. 2011, 41, 982–1006. [Google Scholar] [CrossRef]
41. Tanenbaum, A.S. Structured Computer Organization; Prentice Hall: Englewood Cliffs, NJ, USA, 1976. [Google Scholar]
42. Hopcroft, J.E.; Ullman, J.D. Introduction to Automata, Languages and Computation; Addison-Wesley: Boston, MA, USA, 1979. [Google Scholar]
43. Wallace, D. Decoherence and ontology. In Many Worlds? Everett, Quantum Theory and Reality; Saunders, S., Barrett, J., Kent, A., Wallace, D.D., Eds.; Oxford University Press: Oxford, UK, 2010; pp. 53–72. [Google Scholar]
44. Tegmark, M. Many worlds in context. In Many Worlds? Everett, Quantum Theory and Reality; Saunders, S., Barrett, J., Kent, A., Wallace, D.D., Eds.; Oxford University Press: Oxford, UK, 2010; pp. 553–581. [Google Scholar]
45. Scully, R.J.; Scully, M.O. The Demon and the Quantum: From the Pythagorean Mystics to Maxwell’s Demon and Quantum Mystery; Wiley: New York, NY, USA, 2007. [Google Scholar]
46. Zanardi, P. Virtual quantum subsystems. Phys. Rev. Lett. 2001, 87, 077901:1–077901:4. [Google Scholar]
47. Zanardi, P.; Lidar, D.A.; Lloyd, S. Quantum tensor product structures are observable-induced. Phys. Rev. Lett. 2004, 92, 060402:1–060402:4. [Google Scholar]
48. de la Torre, A.C.; Goyeneche, D.; Leitao, L. Entanglement for all quantum states. Eur. J. Phys. 2010, 31, 325–332. [Google Scholar]
49. Harshman, N.L.; Ranade, K.S. Observables can be tailored to change the entanglement of any pure state. Phys. Rev. A 2011, 84, 012303:1–012303:4. [Google Scholar]
50. Thirring, W.; Bertlmann, R.A.; Köhler, P.; Narnhofer, H. Entanglement or separability: The choice of how to factorize the algebra of a density matrix. Eur. Phys. J. D 2011, 64, 181–196. [Google Scholar]
51. Bartlett, S.D.; Rudolph, T.; Spekkens, R.W. Reference frames, superselection rules, and quantum information. Rev. Mod. Phys. 2007, 79, 555–609. [Google Scholar]
52. Bohr, N. The quantum postulate and the recent developments of atomic theory. Nature 1928, 121, 580–590. [Google Scholar]
53. Zurek, W.H. Relative states and the environment: Einselection, envariance, quantum darwinism, and the existential interpretation. Available online: (accessed on 10 December 2012).
54. Castagnoli, G. Quantum correlation between the selection of the problem and that of the solution sheds light on the mechanism of the speed up. Phys. Rev. A 2010, 82, 052334:1–052334:8. [Google Scholar]
55. Castagnoli, G. Probing the mechanism of the quantum speed-up by time-symmetric quantum mechanics. Available online: (accessed on 10 December 2012). |
The One-Dimensional Finite-Difference Time-Domain (FDTD) Algorithm Applied to the Schrödinger Equation
The code below illustrates the use of the FDTD algorithm to solve the one-dimensional Schrödinger equation for simple potentials. It only requires Numpy and Matplotlib.
All the mathematical details are described in this PDF: Schrodinger_FDTD.pdf
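In brief (the PDF above is the authoritative derivation; this is only a summary of the scheme the code implements): writing the wavefunction as $\psi = \psi_R + i\psi_I$, the Schrödinger equation

$$i\hbar\,\partial_t \psi = -\frac{\hbar^2}{2m}\,\partial_x^2 \psi + V\psi$$

splits into two coupled real equations, $\partial_t\psi_R = -\frac{\hbar}{2m}\,\partial_x^2\psi_I + \frac{V}{\hbar}\,\psi_I$ and $\partial_t\psi_I = \frac{\hbar}{2m}\,\partial_x^2\psi_R - \frac{V}{\hbar}\,\psi_R$, which the leapfrog update advances on a grid of points k and steps n as

$$\psi_I^{n+1}(k) = \psi_I^{n-1}(k) + c_1\big[\psi_R^{n}(k+1) - 2\psi_R^{n}(k) + \psi_R^{n}(k-1)\big] - c_2\,V(k)\,\psi_R^{n}(k)$$
$$\psi_R^{n+1}(k) = \psi_R^{n-1}(k) - c_1\big[\psi_I^{n}(k+1) - 2\psi_I^{n}(k) + \psi_I^{n}(k-1)\big] + c_2\,V(k)\,\psi_I^{n}(k)$$

with $c_1 = \hbar\,\Delta t/(m\,\Delta x^2)$ and $c_2 = 2\Delta t/\hbar$, exactly the constants c1 and c2 defined in the code.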
In these figures the potential is shaded in yellow, in arbitrary units, while the total energy of the wavepacket is plotted as a green line in the same units as the potential. So although the energy units are not those of the left axis, the potential and the wavepacket energy are plotted in the same units and can thus be validly compared with one another.
Depending on the particle energy, the yellow region may be classically forbidden (when the green line is inside the yellow region).
The wavepacket starts at t=0 as (step potential shown):
And at the end of the simulation it can look like this, depending on the actual potential height:
(Images: schrod_step_lo_sm2.png, schrod_step_hi_sm2.png)
This illustrates the tunneling through a thin barrier, depending on the barrier height. In the second case, a classical particle would completely bounce off since its energy is lower than the potential barrier:
(Images: schrod_barrier_lo_sm2.png, schrod_barrier_hi_sm2.png)
# Quantum Mechanical Simulation using Finite-Difference
# Time-Domain (FDTD) Method
# This script simulates a probability wave in the presence of multiple
# potentials. The simulation is carried out by using the FDTD algorithm
# applied to the Schrodinger equation. The program is intended to act as
# a demonstration of the FDTD algorithm and can be used as an educational
# aid for quantum mechanics and numerical methods. The simulation
# parameters are defined in the code constants and can be freely
# manipulated to see different behaviors.
# NOTES
# The probability density plots are amplified by a factor for visual
# purposes. The psi_p quantity contains the actual probability density
# without any rescaling.
# BEWARE: The time step, dt, has strict requirements or else the
# simulation becomes unstable.
# The code has three built-in potential functions for demonstration.
# 1) Constant potential: Demonstrates a free particle with dispersion.
# 2) Step potential: Demonstrates transmission and reflection.
# 3) Potential barrier: Demonstrates tunneling.
# By tweaking the height of the potential (V0 below) as well as the
# barrier thickness (THCK below), you can see different behaviors: full
# reflection with no noticeable transmission, transmission and
# reflection, or mostly transmission with tunneling.
# This script requires pylab and numpy to be installed with
# Python or else it will not run.
# Author: James Nagel <>
# 5/25/07
# Updates by Fernando Perez <>, 7/28/07
# Numerical and plotting libraries
import numpy as np
import pylab
# Set pylab to interactive mode so plots update when run outside ipython
pylab.ion()
# Utility functions
# Defines a quick Gaussian pulse function to act as an envelope to the wave
# function.
def Gaussian(x,t,sigma):
    """ A Gaussian curve.
    x = Variable
    t = time shift
    sigma = standard deviation """
    return np.exp(-(x-t)**2/(2*sigma**2))

def free(npts):
    "Free particle."
    return np.zeros(npts)

def step(npts,v0):
    "Potential step"
    v = free(npts)
    v[npts//2:] = v0
    return v

def barrier(npts,v0,thickness):
    "Barrier potential"
    v = free(npts)
    v[npts//2:npts//2+thickness] = v0
    return v

def fillax(x,y,*args,**kw):
    """Fill the space between an array of y values and the x axis.
    All args/kwargs are passed to the pylab.fill function.
    Returns the value of the pylab.fill() call.
    """
    xx = np.concatenate((x,np.array([x[-1],x[0]],x.dtype)))
    yy = np.concatenate((y,np.zeros(2,y.dtype)))
    return pylab.fill(xx, yy, *args,**kw)
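# (Addition, not in the original cookbook: a hypothetical double-barrier
# potential, handy for exploring resonant tunneling. Same conventions as
# barrier() above; `gap` is the number of grid points between the barriers.)
def double_barrier(npts,v0,thickness,gap):
    "Two barriers of equal height separated by a field-free gap."
    v = free(npts)
    v[npts//2 : npts//2+thickness] = v0
    v[npts//2+thickness+gap : npts//2+2*thickness+gap] = v0
    return v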
# Simulation Constants. Be sure to include decimal points on appropriate
# variables so they become floats instead of integers.
N = 1200 # Number of spatial points.
T = 5*N # Number of time steps. 5*N is a nice value for terminating
# before anything reaches the boundaries.
Tp = 50 # Number of time steps to increment before updating the plot.
dx = 1.0e0 # Spatial resolution
m = 1.0e0 # Particle mass
hbar = 1.0e0 # Planck's constant
X = dx*np.linspace(0,N,N) # Spatial axis.
# Potential parameters. By playing with the type of potential and the height
# and thickness (for barriers), you'll see the various transmission/reflection
# regimes of quantum mechanical tunneling.
V0 = 1.0e-2 # Potential amplitude (used for steps and barriers)
THCK = 15 # "Thickness" of the potential barrier (if appropriate
# V-function is chosen)
# Uncomment the potential type you want to use here:
# Zero potential, packet propagates freely.
#POTENTIAL = 'free'
# Potential step. The height (V0) of the potential chosen above will determine
# the amount of reflection/transmission you'll observe
POTENTIAL = 'step'
# Potential barrier. Note that BOTH the potential height (V0) and thickness
# of the barrier (THCK) affect the amount of tunneling vs reflection you'll
# observe.
#POTENTIAL = 'barrier'
# Initial wave function constants
sigma = 40.0 # Standard deviation on the Gaussian envelope (remember Heisenberg
# uncertainty).
x0 = round(N/2) - 5*sigma # Time shift
k0 = np.pi/20 # Wavenumber (note that energy is a function of k)
# Energy for a localized gaussian wavepacket interacting with a localized
# potential (so the interaction term can be neglected by computing the energy
# integral over a region where V=0)
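# (Added note, not in the original: for the packet below,
#  psi(x,0) ~ exp(-(x-x0)^2/(2*sigma^2)) * exp(i*k0*x), one finds
#  <p^2> = hbar^2*(k0^2 + 1/(2*sigma^2)), so with V ~ 0 around the packet
#  E = <p^2>/(2m) = (hbar^2/(2m))*(k0^2 + 0.5/sigma^2), the line below.)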
E = (hbar**2/2.0/m)*(k0**2+0.5/sigma**2)
# Code begins
# You shouldn't need to change anything below unless you want to actually play
# with the numerical algorithm or modify the plotting.
# Fill in the appropriate potential function (is there a Python equivalent to
# the SWITCH statement?).
if POTENTIAL=='free':
    V = free(N)
elif POTENTIAL=='step':
    V = step(N,V0)
elif POTENTIAL=='barrier':
    V = barrier(N,V0,THCK)
else:
    raise ValueError("Unrecognized potential type: %s" % POTENTIAL)
# More simulation parameters. The maximum stable time step is a function of
# the potential, V.
Vmax = V.max() # Maximum potential of the domain.
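# (Added note, not in the original: the line below is the usual stability
#  bound for this explicit scheme, dt <= hbar / (2*hbar^2/(m*dx^2) + Vmax);
#  exceeding it makes the highest spatial frequencies grow without bound.
#  See the accompanying PDF for the derivation.)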
dt = hbar/(2*hbar**2/(m*dx**2)+Vmax) # Critical time step.
c1 = hbar*dt/(m*dx**2) # Constant coefficient 1.
c2 = 2*dt/hbar # Constant coefficient 2.
c2V = c2*V # pre-compute outside of update loop
# Print summary info
print('One-dimensional Schrodinger equation - time evolution')
print('Wavepacket energy:   ', E)
print('Potential type:      ', POTENTIAL)
print('Potential height V0: ', V0)
print('Barrier thickness:   ', THCK)
# Wave functions. Three states represent past, present, and future.
psi_r = np.zeros((3,N)) # Real
psi_i = np.zeros((3,N)) # Imaginary
psi_p = np.zeros(N,) # Observable probability (magnitude-squared
# of the complex wave function).
# Temporal indexing constants, used for accessing rows of the wavefunctions.
PA = 0 # Past
PR = 1 # Present
FU = 2 # Future
# Initialize wave function. A present-only state will "split" with half the
# wave function propagating to the left and the other half to the right.
# Including a "past" state will cause it to propagate one way.
xn = range(1,N//2)
x = X[xn]/dx # Normalized position coordinate
gg = Gaussian(x,x0,sigma)
cx = np.cos(k0*x)
sx = np.sin(k0*x)
psi_r[PR,xn] = cx*gg
psi_i[PR,xn] = sx*gg
psi_r[PA,xn] = cx*gg
psi_i[PA,xn] = sx*gg
# Initial normalization of wavefunctions
# Compute the observable probability.
psi_p = psi_r[PR]**2 + psi_i[PR]**2
# Normalize the wave functions so that the total probability in the simulation
# is equal to 1.
P = dx * psi_p.sum() # Total probability.
nrm = np.sqrt(P)
psi_r /= nrm
psi_i /= nrm
psi_p /= P
# Initialize the figure and axes.
xmin = X.min()
xmax = X.max()
ymax = 1.5*(psi_r[PR]).max()
# Initialize the plots with their own line objects. The figures plot MUCH
# faster if you simply update the lines as opposed to redrawing the entire
# figure. For reference, include the potential function as well.
lineR, = pylab.plot(X,psi_r[PR],'b',alpha=0.7,label='Real')
lineI, = pylab.plot(X,psi_i[PR],'r',alpha=0.7,label='Imag')
lineP, = pylab.plot(X,6*psi_p,'k',label='Prob')
pylab.title('Potential height: %.2e' % V0)
# For non-zero potentials, plot them and shade the classically forbidden region
# in light red, as well as drawing a green line at the wavepacket's total
# energy, in the same units the potential is being plotted.
if Vmax != 0:
    # Scaling factor for energies, so they fit in the same plot as the
    # wavefunctions
    Efac = ymax/2.0/Vmax
    V_plot = V*Efac
    pylab.plot(X,V_plot,':k',zorder=0)   # Potential line.
    fillax(X,V_plot, facecolor='y', alpha=0.2,zorder=0)
    # Plot the wavefunction energy, in the same scale as the potential
    pylab.axhline(E*Efac, color='g', label='Energy', zorder=1)
pylab.legend(loc='lower right')
# I think there's a problem with pylab, because it resets the xlim after
# plotting the E line. Fix it back manually.
pylab.xlim(xmin, xmax)
# Direct index assignment is MUCH faster than using a spatial FOR loop, so
# these constants are used in the update equations. Remember that Python uses
# zero-based indexing.
IDX1 = range(1,N-1) # psi [ k ]
IDX2 = range(2,N) # psi [ k + 1 ]
IDX3 = range(0,N-2) # psi [ k - 1 ]
for t in range(T+1):
    # Precompute a couple of indexing constants, this speeds up the computation
    psi_rPR = psi_r[PR]
    psi_iPR = psi_i[PR]
    # Apply the update equations.
    psi_i[FU,IDX1] = psi_i[PA,IDX1] + \
                     c1*(psi_rPR[IDX2] - 2*psi_rPR[IDX1] +
                         psi_rPR[IDX3])
    psi_i[FU] -= c2V*psi_r[PR]
    psi_r[FU,IDX1] = psi_r[PA,IDX1] - \
                     c1*(psi_iPR[IDX2] - 2*psi_iPR[IDX1] +
                         psi_iPR[IDX3])
    psi_r[FU] += c2V*psi_i[PR]
    # Increment the time steps. PR -> PA and FU -> PR
    psi_r[PA] = psi_rPR
    psi_r[PR] = psi_r[FU]
    psi_i[PA] = psi_iPR
    psi_i[PR] = psi_i[FU]
    # Only plot after a few iterations to make the simulation run faster.
    if t % Tp == 0:
        # Compute observable probability for the plot.
        psi_p = psi_r[PR]**2 + psi_i[PR]**2
        # Update the plots.
        lineR.set_ydata(psi_r[PR])
        lineI.set_ydata(psi_i[PR])
        # Note: we plot the probability density amplified by a factor so it's a
        # bit easier to see.
        lineP.set_ydata(6*psi_p)
        pylab.draw()
# So the windows don't auto-close at the end if run outside ipython
pylab.ioff()
pylab.show()
SciPy: Cookbook/SchrodingerFDTD |
8893edfd3a651988 | A Maple package RATH, which entirely automatically outputs tanh-polynomial travelling solitary wave solutions of a given nonlinear evolution equation, is presented. The effectiveness of RATH is demonstrated by applications to a variety of equations of physical interest as examples. Not only are previously known solutions recovered, but in some cases new solutions and more general forms of solutions are obtained. (Source: http://cpc.cs.qub.ac.uk/summaries/)
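For context, the tanh-method ansatz that such packages automate is (a standard form, not quoted from the RATH paper)
$$u(x,t) ~=~ \sum_{i=0}^{M} a_i\,Y^i, \qquad Y=\tanh\big(k(x-ct)\big), \qquad \frac{dY}{d\xi}=k\,(1-Y^2), \qquad \xi=x-ct,$$
which turns the evolution equation into a system of algebraic equations for the coefficients $a_i$, the wave number $k$, and the speed $c$.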
References in zbMATH (referenced in 82 articles , 2 standard articles )
Showing results 1 to 20 of 82.
Sorted by year (citations)
1. Elboree, Mohammed K.: Hyperbolic and trigonometric solutions for some nonlinear evolution equations (2012)
2. El-Ganaini, Shoukry: Applications of He’s variational principle and the first integral method to the Gardner equation (2012)
3. Ge, Juhong; Hua, Cuncai; Feng, Zhaosheng: A method for constructing traveling wave solutions to nonlinear evolution equations (2012)
4. Nuseir, Ameina S.: New exact solutions to the modified Fornberg-Whitham equation (2012)
5. Zhao, Lei; Huang, Dingjiang; Zhou, Shuigeng: A new algorithm for automatic computation of solitary wave solutions to nonlinear partial differential equations based on the Exp-function method (2012)
6. Kavitha, L.; Akila, N.; Prabhu, A.; Kuzmanovska-Barandovska, O.; Gopi, D.: Exact solitary solutions of an inhomogeneous modified nonlinear Schrödinger equation with competing nonlinearities (2011)
7. Sekulić, Dalibor L.; Satarić, Miljko V.; Živanov, Miloš B.: Symbolic computation of some new nonlinear partial differential equations of nanobiosciences using modified extended tanh-function method (2011)
8. Taghizadeh, Nasir; Mirzazadeh, Mohammad; Farahrooz, Foroozan: Exact travelling wave solutions of the coupled Klein-Gordon equation by the infinite series method (2011)
9. Yang, Hongwei; Yin, Baoshu; Dong, Huanhe: Frobenius integrable decompositions for high-order nonlinear evolution equations (2011)
10. Feng, Yang; Wang, Dan; Li, Wen-Ting; Zhang, Hong-Qing: More solutions of the auxiliary equation to get the solutions for a class of nonlinear partial differential equations (2010)
11. Gao, Xianwen; Liu, Jun; Li, Zitian: New exact kink solutions, solitons and periodic form solutions for the Gardner equation (2010)
12. Kudryashov, Nikolay A.: A new note on exact complex travelling wave solutions for $(2+1)$-dimensional B-type Kadomtsev-Petviashvili equation (2010)
13. Li, Hongzhe; Tian, Bo; Li, Lili; Zhang, Haiqiang: Painlevé analysis and Darboux transformation for a variable-coefficient Boussinesq system in fluid dynamics with symbolic computation (2010)
14. Liu, Cheng-Shi: Applications of complete discrimination system for polynomial for classifications of traveling wave solutions to nonlinear differential equations (2010)
15. Wang, Jing: Some new and general solutions to the compound KdV-Burgers system with nonlinear terms of any order (2010)
16. Wazwaz, Abdul-Majid; Mehanna, Mona S.: A variety of exact travelling wave solutions for the $(2+1)$-dimensional Boiti-Leon-Pempinelli equation (2010)
17. Zhang, Huiqun: Application of the $(\frac{G'}{G})$-expansion method for the complex KdV equation (2010)
18. Zhang, Huiqun: A note on exact complex travelling wave solutions for $(2+1)$-dimensional B-type Kadomtsev-Petviashvili equation (2010)
19. Zhang, Jiao; Wei, Xiaoli; Hou, Jingchen: Symbolic computation of exact solutions for the compound KdV-Sawada-Kotera equation (2010)
20. Zhou, Zhen-jiang; Fu, Jing-zhi; Li, Zhi-bin: Maple packages for computing Hirota’s bilinear equation and multisoliton solutions of nonlinear evolution equations (2010)
|
97efb1d75f731796 |
Is there anything in the physics that requires the wave function to be $C^2$? Are weak solutions to the Schroedinger equation physical? I am reading the beginning chapters of Griffiths and he doesn't mention anything.
Here we want to show that there is an easy mathematical bootstrap argument why solutions to the time independent 1D Schrödinger equation
$$-\frac{\hbar^2}{2m} \psi^{\prime\prime}(x) + V(x) \psi(x) ~=~ E \psi(x) \qquad\qquad (1)$$
tend to be rather nice. First rewrite eq. (1) in integral form
$$ \psi(x)~=~ \frac{2m}{\hbar^2} \int^{x}\mathrm{d}y \int^{y}\mathrm{d}z\ (V(z)-E)\psi(z) .\qquad\qquad (2)$$
There are various cases.
1. Case $V \in {\cal L}^2_{\rm loc}(\mathbb{R})$ is a locally square integrable function. Assume the wavefunction $\psi \in {\cal L}^2_{\rm loc}(\mathbb{R})$ as well. Then the product $(V-E)\psi\in {\cal L}^1_{\rm loc}(\mathbb{R})$ by the Cauchy–Schwarz inequality. Then the inner integral $y\mapsto \int^{y}\mathrm{d}z\ (V(z)-E)\psi(z)$ is continuous, and integrating once more, the wavefunction $\psi$ on the left-hand side of eq. (2) is continuously differentiable, $\psi\in C^{1}(\mathbb{R}).$
2. Case $V \in C^{p}(\mathbb{R})$ for a non-negative integer $p\in\mathbb{N}_0$. A similar bootstrap argument (each pass through eq. (2) gains two derivatives) shows that $\psi\in C^{p+2}(\mathbb{R}).$
The above two cases do not cover a couple of often-used mathematically idealized potentials $V(x)$, e.g.,
1. the infinite wall $V(x)=\infty$ in some region. (The wavefunction must vanish $\psi(x)=0$ in this region.)
2. or a Dirac delta distribution $V(x)=V_0\delta(x)$.
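For the delta-distribution case one can still be precise: integrating eq. (1) across a vanishing interval around $x=0$ yields the standard textbook derivative-jump condition
$$\lim_{\epsilon\to 0^+}\left[\psi^{\prime}(\epsilon)-\psi^{\prime}(-\epsilon)\right] ~=~ \frac{2mV_0}{\hbar^2}\,\psi(0),$$
so $\psi$ stays continuous while $\psi^{\prime}$ does not.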
Some of this was discussed elsewhere. See « significance of unbounded operators » .
It is not true that the wave function has to be continuous, it just has to be measurable (i.e., a limit of step functions almost everywhere). Naturally you might wonder what sense Schroedinger's equation makes if you apply it to a step function...but the answer is easier than worrying about distributional weak solutions. The point is that you can solve the time-dependent Schroedinger equation with the exponential $$e^{itH},$$ which is a family of unitary operators, and which is better behaved than the $H$ you have to use in Schroedinger's equation. The $H$ you have to use, for example $$-{\partial ^2\over\partial x^2} + \mathrm{other\ stuff}, $$ is unbounded. And non-differentiable functions are not in its domain. But plugging it into the power series for the exponential converges in norm anyway, and so the resulting operator, being bounded and even unitary on a dense domain of the Hilbert space, can be extended painlessly to the entire space, even step functions. So it makes more sense to say that the solution to Schroedinger's equation with a given initial condition $\psi_o$ is $$\psi_t (x) = e^{itH}\cdot \psi_o (x)$$ and there is no need to bring in distributional weak solutions. These considerations go by the name of Stone's theorem on one-parameter unitary groups.
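A minimal numerical sketch of this idea (all grid sizes and parameter values below are illustrative choices, with $\hbar = 2m = 1$): on a finite grid the discretized $H$ is a bounded matrix, so its exponential is an honest unitary that can be applied even to a step-function state.
import numpy as np
from scipy.linalg import expm

# Discretize H = -d^2/dx^2 on (0, 1) with hard walls (Dirichlet conditions).
N = 200
dx = 1.0/(N + 1)
H = (np.diag(2.0*np.ones(N)) + np.diag(-np.ones(N - 1), 1)
     + np.diag(-np.ones(N - 1), -1))/dx**2

# U(t) = exp(-iHt); on the grid H is a bounded matrix, so expm is unproblematic.
t = 1e-4
U = expm(-1j*H*t)

# A step-function initial state: not differentiable, but a fine L^2 vector.
x = np.linspace(dx, 1 - dx, N)
psi0 = np.where(x < 0.5, 1.0, 0.0).astype(complex)
psi0 /= np.linalg.norm(psi0)

psi_t = np.dot(U, psi0)
print(np.linalg.norm(psi_t))   # stays 1.0 up to rounding: the evolution is unitary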
But such functions are not very important and indeed it is possible to do all of Quantum Mechanics with smooth functions, especially if you take the attitude that, for example, a square well potential would also be unphysical and is really just a simplified approximation of a physical potential which smoothed off those square corners but had a formula that was unmanageable.... See Anthony Sudbery, Quantum Mechanics and the Particles of Nature, which since it is written by a mathematician, is careful about unimportant issues like this.
That family of operators I wrote down is called the time-evolution operators, and they are an example of a one-parameter unitary group, the parameter being time. It is easy to see that if $\psi_o$, the initial condition, the state of the quantum system at time $t=0$, is nice and smooth, then all the future states will be nice and smooth too. Furthermore, all the usual quantum observables have eigenstates which are nice and smooth, so if you perform a future measurement, you will get a function which is nice and smooth and its future time evolution will remain that way, until the next measurement, etc. until Doomsday.
That said, for all practical purposes you may assume all wave functions are smooth and that the only reason you study discontinuous ones is as convenient approximations.
The comment one sometimes hears is that a wave function that was not in the domain of the Hamiltonian would « have infinite energy » but this is nonsense. In Quantum Mechanics, you are not allowed to talk about a quantum system as having a definite value of an observable unless it is in an eigenstate of that observable. What you can ask is, what would be the expectation of that observable. If the wave function $\psi$ is discontinuous and not in the domain of the Hamiltonian, it cannot be an eigenstate, but if its energy is measured, the answer will always be finite. Yet, the expectation of its energy does not exist, or you could say, the expectation « is infinite ». Not the energy, its expectation. There is nothing very unphysical about this because expectation itself is not very directly physical: you cannot measure the expectation unless you make infinitely many measurements, and your estimated answer, even for this discontinuous function, will always be a finite expectation. It's just that those estimates are way inaccurate, the expectation really is infinite (like the Cauchy distribution in statistics).
But even for such a « bad » wavefunction, all the axioms of Quantum Mechanics apply: the probability that the energy, if measured, will be 7 erg, is calculated the usual way. But these bad wave functions never arise in elementary systems or exercises so most people think they are « unphysical ». And, as I said, if the initial condition is a « good » wave function, the system will never evolve outside of that. This, I think, is connected with the fact that in QM, all systems have a finite number of degrees of freedom: this would no longer be true for quantum systems with infinitely many degrees of freedom such as are studied in Statistical Mechanics.
Right, there's nothing wrong about step functions, delta-functions (the derivatives of the former), and others, and that's why physicists freely work with them and never mention artificial mathematical constraints. Still, some discontinuities may make the kinetic energy infinite, so they don't exist in the finite-energy spectrum. I would add that the most natural space of functions to consider is $L^2$, all square-integrable functions. They may be Fourier-transformed or converted to other (discrete...) bases. A subset also has a finite (expectation value of) energy. – Luboš Motl Jan 18 '12 at 7:22
The time-independent Schroedinger equation for the position-space wavefunction has the form $$\left(\frac{-\hbar^2}{2m}\nabla^2 +(V-E) \right)\Psi=0$$
Where $E$ is the energy of that particular eigenstate, and $V$ in general depends on the position. All physical wavefunctions must be in some superposition of states that satisfy this equation.
At least in nonrelativistic QM, the wavefunction is not allowed to have infinite energy. This implies that either the second derivative of the wavefunction exists everywhere, or, at any point where it fails to exist or is infinite, $V$ has some property that "cancels out" the discontinuity (as in the infinite square well).
Generally, $\Psi$ must always be continuous, and its first spatial derivative must exist unless $V$ is infinite at that point.
|
e6ef435bac41dd5f | Natural Science Seminar Abstracts 2005-2006
Sept. 5, 2005
Elizabeth Wunker
Nest-site Selection of the Piping Plover
Mentor: Dr. Louise Weber
Abstract: The Piping Plover (Charadrius melodus) is a small shorebird endemic to North America. These birds nest along the shores of rivers and lakes in the Great Lakes and Great Plains and along the coast and bays of the Atlantic. In New York the piping plover is federally threatened and state endangered. One New York site that has nesting piping plovers is the Incorporated Village of West Hampton Dunes. Currently, the Village is undergoing a 30-year nourishment project in which the Army Corps of Engineers pumps sand onto the beach every five years. Nourishment has raised the question of whether the number of nesting piping plovers can be increased by adding more than just sand to the beaches. The objective of this study was to determine whether piping plovers select nest sites depending on the amount of sand, shell, cobble, and vegetation compared to random sites on three Westhampton Island sites. The study was conducted on Shinnecock beach, Westhampton beach, and the Incorporated Village of West Hampton Dunes between April and July of 2004. Thirty-two nest and corresponding random points were found and substrate data were collected using a 1 m² grid with 36 intersections. The substrate under each intersection was recorded. SAS was used to run a multiple logistic regression test on eighteen selected models. Three models were found to be stronger than the null but not significantly (p > 0.05) different from random points. They included (1) percent cover by grass and total large shell, (2) percent sand, and (3) total large shell. None of the models were strong enough to recommend the addition of other substrates to the sand during nourishment. I would recommend that the Army Corps of Engineers maintain their current nourishment methods of laying down just sand.
September 19, 2005
Sarah Zane Lewis
The Role of the Unique Region of the Surrogate Light Chain in Autoimmunity
Mentors: Dr. James Baleja, Dr. B. David Stollar and Dr. Jeffery Holmes
Abstract: Autoimmune diseases are a broad category of systemic illness caused by an immune response to self molecules. In the case of systemic lupus erythematosus (SLE), disease is marked by the production of anti-DNA antibodies. The origin of autoreactive antibodies, or autoantibodies, is not fully understood. Antibodies are produced by plasma B cells, once antigen is encountered via the B cell receptor. One component of the B cell receptor, the heavy chain, is known to be autoreactive if not accompanied by a light chain. In a pre-B cell receptor, the surrogate light chain may act in place of the light chain to reduce autoreactivity by the heavy chain. The surrogate light chain (VpreB) exhibits a non-light-chain-like unique region that has been removed by mutation with little effect. This raises questions about the purpose of this non-immunoglobulin-like region. The objective of this study was to determine whether the surrogate light chain (VpreB) affects the autoreactivity of the heavy chain. A second objective was to determine the extent to which the unique region of surrogate light chain protein, VpreB, plays a role in the binding of ligand. In this study, the ssDNA, poly(dT), was used as a marker for autoreactivity. A modified sandwich-type ELISA was used to determine the amount of ssDNA bound by VpreB, wild type and mutant, with and without the heavy chain domain, VH19. Three assays were completed with similar results. The surrogate light chain was not found to lower the autoreactivity of the heavy chain. The absorbance of both proteins increased when incubated together as the pre-B cell receptor complex. Additionally, the surrogate light chain mutant, VpreBDU-J, was found to have an absorbance greater than the heavy chain alone. This suggests that the surrogate light chain may not play a role in inhibiting ligand binding by the heavy chain. The data also suggest that the unique region has an inhibitory function, previously undiscovered. Further study could determine whether the unique region inhibits the binding of self molecules, reducing the autoreactivity of the heavy chain at the early pre-B cell stage. Continued study of the unique region and the potential relationship to autoreactivity will broaden current understanding of the function of the surrogate light chain and the role of the pre-B cell receptor in development and autoimmune disease.
September 19, 2005
Erik Nash
The relationship between Vicia sativa and the ants feeding at their extrafloral nectaries on the Warren Wilson campus.
Mentor: Dr. Amy Boyd
Abstract: Vicia sativa, a pea-like annual native to the United Kingdom, is found all around the Warren Wilson Campus from early April to late May. This plant has nectar-secreting organs, called extrafloral nectaries (efns), located on the underside of each stipule. It has been noticed that an unusually large number of ants are living on or near the V. sativa plants. Ants and plants are known to form symbiotic mutualisms in which the ant receives nectar from the efns and the plant receives protection from the ant. A symbiotic mutualism is defined as a relationship in which two species, living in close proximity to one another, both benefit from their relationship. The objective of this study was to determine if a symbiotic mutualism existed between V. sativa and the ants found feeding at their efns on Warren Wilson’s campus. In order to determine this, the efns of all my experimental plants were cut off so nectar was no longer secreted, while the control plants remained unharmed. At least twice a week the plants were inspected, treatments were carried out and the number of ants on both the control and experimental plants was recorded. After a month of treatments a blind research assistant was used in order to judge the relative amount of herbivory received by each plant. After comparing the relative amount of herbaceous damage received by the control plants (n=23) and the experimental plants (n=23), I used the Mann-Whitney test to conclude that the presence of efns did not affect the amount of herbivory a plant received (p=0.0966). The numbers of ants found on each of the control plants and the experimental plants were compared using the Mann-Whitney test to see if the presence of efns affected ant numbers. I found a significantly larger number of ants (p=0.0007) on the control plants than on the experimental plants. Finally, a linear regression test was performed to see if there was any correlation between the number of ants found on each plant and the herbivory scale number that plant received. Because there was no correlation between the numbers of ants found on the plants and the herbivory scale ratings the plants received (p=0.2920), I concluded that there was not a symbiotic mutualism between V. sativa and the ants feeding at their extrafloral nectaries on the Warren Wilson campus.
September 26th, 2005
Audrey Williamson
The Effects of Age and Gender on the Learning Ability of the Horse (Equus caballus)
Mentor: Dr. Robert Eckstein
Abstract: Horse training is a valuable industry within the United States and throughout the world. In order to capitalize on this industry it is important to understand the horse (Equus caballus) and its natural tendencies. Horses have evolved to thrive on the open plains and have dichromatic color vision. The objective of this study was to measure the effect of age and gender on the ability of a horse to successfully complete a learning task. The learning task was to discover and remember that the white bucket has accessible food. Each horse was led into a standard pen and allowed to choose between a black bucket and a white bucket. The horse was positively reinforced for choosing the white bucket; the black bucket was a neutral stimulus. Neither gender (p value= 0.9695) nor age (r=0.0035) was found to have a significant effect on the learning ability of the horses tested. All horses were successful in completing the learning task, suggesting that horses of various ages and genders can be successful at learning through operant conditioning.
September 26, 2005
Josha McBee
Determination of Paw Preference in Raccoons (Procyon lotor)
Mentor: Dr. Robert Eckstein
Abstract: Raccoons (Procyon lotor) have sensitive, agile hands, which they use to handle prey, climb, and pry things open. Their highly developed tactile sense made them a good subject for a paw preference study. Paw preference is defined as the tendency to use one paw over the other. There have been many studies done on paw preference dating as far back as 1930. These studies have primarily investigated chimpanzees, rats, mice, cats, and dogs. Results of these studies have found paw preferences to exist in individuals of all these species. My objective was to determine whether individual raccoons show a paw preference in a food/toy-reaching task. This task required the raccoons to reach a paw through a small opening in a container in order to pull out food or a toy. My hypothesis was that the raccoons would use one paw more than the other when reaching into the container. I tested six raccoons at Genesis Wildlife Sanctuary on Beech Mountain N.C. I recorded which paw each raccoon used to reach into the container for a total of 100 reaches. I ran a chi square test for each raccoon individually. The resulting p-values for all six raccoons were less than .02. This supports the hypothesis that all six raccoons have a paw preference.
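The per-animal chi-square test described above is straightforward to reproduce; a minimal Python sketch with scipy.stats, using hypothetical counts (the abstract does not report the raw reach counts):
from scipy.stats import chisquare

# Hypothetical data for one raccoon: 70 left-paw vs. 30 right-paw reaches out
# of 100; the null hypothesis of no paw preference expects a 50/50 split.
observed = [70, 30]
chi2, p = chisquare(observed, f_exp=[50, 50])
print(chi2, p)   # p is far below 0.02 here, so a preference would be inferred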
October 3, 2005
Clayton Wilburn
Analysis of Synthetic and Natural Estrogens in the Influent and Effluent of the Buncombe County Wastewater Treatment Plant
Mentor: Dr. John Brock
Abstract: In recent years researchers have become increasingly concerned with the presence of endocrine disrupting chemicals (EDCs) in the environment. The majority of the research on EDCs focuses on estrogen mimics, which primarily enter the environment from sewage treatment plant effluent. The natural estrogens estradiol and estrone and the synthetic estrogen ethynylestradiol are principally responsible for the estrogenic nature of wastewater and are known to cause feminization of male fish at low parts per trillion (ppt) concentrations. Therefore the objective of this study was to quantify the amount of estrone, estradiol, and ethynylestradiol in the influent and effluent of the Buncombe Co. wastewater treatment plant and determine the elimination of the analytes from the influent. Isotope-dilution gas chromatography/mass spectrometry (GC/MS) served as the method of analysis. The analytes were derivatized using BSTFA with 1% TMCS in pyridine. For the initial calibration curves a linear response over two orders of magnitude was obtained for estradiol and estrone but not for ethynylestradiol. The labeled ethynylestradiol was found to be contributing to the native signal after performing a full scan analysis. A linear response was obtained for ethynylestradiol when the quantitative ions were changed. The analytes were extracted from the wastewater samples using solid-phase extraction. In the preliminary analysis of wastewater samples, the target analyte peaks were inadequately separated from co-eluting peaks. The GC column was changed to a 60 m column, and the wastewater samples re-analyzed. The longer column achieved adequate separation of the target analyte peaks. Calibration curves were then constructed using the new column, and the limit of detection (LOD) for each analyte was determined. The instrument LOD for estradiol and ethynylestradiol was 31.9 ppb and 13.4 ppb, respectively. The method LOD for estradiol and ethynylestradiol was 6.38 ppt and 2.69 ppt, respectively. Estrone was eliminated from the analysis due to interference in the method. The concentration of estradiol in the influent and effluent was determined to be <6.38-<16.0 ppt, with the variation due to different sample size. The concentration of ethynylestradiol in the influent and effluent was determined to be <2.69-<6.70 ppt, with the variation due to different sample size. The obtained concentrations agree with the concentrations of estradiol and ethynylestradiol found in other studies. Further analysis of the wastewater is warranted, as the concentrations of estradiol and ethynylestradiol found in this study are in the hormones’ effective range for endocrine disruption in wildlife.
October 3, 2005
Emily Leghart
Relationships between body length and attributes of vocalization in Gray Tree Frogs.
Mentors: Dr. Robert Eckstein and Dr. Paul Bartels
Abstract: There are two species of gray tree frogs: Hyla versicolor, commonly called the Gray Tree Frog, and Hyla chrysoscelis, referred to as Cope’s Gray Tree Frog. Although Hyla versicolor and Hyla chrysoscelis are two separate species, they used to be considered the same species because they are cryptic. In the field, the only way to differentiate the frogs is by vocalization. The call of Hyla chrysoscelis is a faster trill and has a higher pitch than that of Hyla versicolor. The collective range of the gray tree frogs is from Maine and southeastern Canada to Northern Florida and from the east coast to as far west as Central Texas. The most commonly heard frog calls are advertisement calls of males. This type of call is primarily used to attract females or to defend or gain mating territory. A female may be attracted to different aspects of a male’s call. The specific properties or combination of properties that actually attracts the female is species specific. The objective of my study was to determine whether the length of individual frogs played a role in the latency between their calls, number of trills per call, and frequency at peak intensity of trills. I also wanted to determine if the species of frogs from the population I sampled were Hyla chrysoscelis or Hyla versicolor. To do this I sampled gray tree frogs from the Warren Wilson Pig Pond over a three-week period this past summer during the frogs’ mating season. To collect my samples I located the frogs by sound then by sight. I would then record the individuals’ calls on a Marantz Portable Cassette Recorder using a Dan Gibson Parabolic Microphone. I assigned each frog a letter and recorded the temperature. I then caught the frogs by hand. I measured the frog from the snout to the caudal end. I released the frog as close to the collection site as possible and moved further along the pond to continue sampling. I collected between five and eight calls from 12 frogs to analyze for a relationship between the body length and vocal attributes. I recorded 13 additional calls from non-captured individuals to analyze for a relationship between the temperature and the trills per second. All 25 sets of calls were used in the species analysis. The calls were analyzed using the audio editing computer program Raven Lite from Cornell University. I analyzed the calls for the average number of trills per call, average latency between the calls, and average frequency at peak intensity. I used a linear regression analysis to determine if the length of the individuals affected the average latency between the calls, average number of trills per call, and average frequency at peak intensity of trills. I found no significant correlation between body length and any of the analyzed vocal attributes. I compared my sampled calls with previous studies, which compared the vocalizations of the two species of gray tree frogs. The results of this species analysis were inconclusive.
November 7, 2005
Bart Pfautz
The Effect of Leachate from Wood Shavings Produced in Trail Maintenance on Daphnia magna.
Mentor: Dr. Greg Ettl
Abstract: Wooden poles are often used in trail projects throughout the U.S. National Park system. In the Great Smoky Mountains National Park chromated copper arsenate treated poles, untreated black locust (Robinia pseudoacacia) poles, and untreated eastern hemlock (Tsuga canadensis) poles are the most commonly used. These poles are used because they are decay resistant. The most significant cause of decay in wood is fungi. Wood-destroying fungi can seriously reduce the service life of wood. Extractives are the compounds primarily responsible for a wood’s natural resistance to decay. They are toxic to decay organisms. Extractives are easily leached from wood with water. Woods that are not decay resistant are treated with preservatives that are toxic. Chromated copper arsenate has been the most widely used preservative for the past 60 years. Recently the EPA has banned the use of CCA-treated wood for residential purposes due to the toxicity of the CCA preservative components. Leachate from CCA-treated wood is toxic to a variety of marine organisms. Some studies have suggested that untreated wood is more toxic than treated wood. The proposed reason is that naturally occurring extractives are removed or altered in the CCA treatment process and are no longer toxic. The objective of my study was to determine the relative toxicities of the three woods used for trail projects in the Great Smoky Mountains National Park. A sample composed of saw chips from fifteen different poles was made for each treatment – CCA-treated wood, hemlock, and black locust. Leachates were produced using each composite sample. Daphnia magna mortality in each of the leachates was recorded over a two-day period. A contingency table was used to perform a chi-square test on the data. At one and a half hours of exposure significant mortality was seen in the CCA-treated wood leachate (p<0.05). After a day of exposure significant mortality was seen in all the treatments. In a CCA-treated wood leachate, produced from one-tenth the amount of wood used to produce a hemlock and locust leachate, significant mortality was reached earlier. CCA-treated wood leachate was more toxic to Daphnia magna than hemlock or locust leachates.
November 7, 2005
Amos Little
Herbicidal effects of Ailanthus altissima extracts on native and non-native invasive plants
Mentor: Dr. Michael Torres
Abstract: This country spends approximately 137 billion dollars a year in efforts to control introduced species, also known as exotics. Thirty-four billion of the 137 billion dollars is spent in efforts to control exotic plants. One reason that some exotics do so well in introduced environments is thought to be because some produce chemical compounds that inhibit the growth and germination of other plants; this is called allelopathy. One of the exotics on Warren Wilson College campus that has been a problem in the past is Ailanthus altissima (Tree-of-Heaven), which has been shown to produce allelopathic compounds. The objectives of this study were to determine if aqueous extracts of A. altissima inhibited the growth and/or germination of native and non-native invasive plants. Three species were used in this experiment: Robinia pseudoacacia, Celastrus orbiculatus, and Lespedeza bicolor. All three species were tested in two studies: a survival study and a germination study. Three treatments were applied to all three species in both studies: a control of D.I. water, a low concentration (4g/L) and a high concentration (10g/L). The extracts were made from dry A. altissima root bark by soaking 10 g of dry bark in 1 liter of D.I. water for 48 hours with occasional agitation; the low concentration was made by diluting the high concentration down to 4g/L. In the survival study there were 10 replicates per concentration per species; each replicate was an individual plant in a pot with soil media. 30 ml of the extracts (control, low or high) were applied three times at intervals of four days in between each application. At the end of the application period the plants were classified as dead or alive (if the plant had low vigor but leaves not completely bleached it was considered alive). The germination study had 3 replicates per concentration per species; each replicate was a petri dish with Whatman paper and 20 seeds in it as well as the respective concentration (control, low or high). The seeds were allowed 10 days to germinate and at the end of the 10 days the number of germinated seeds was counted. A Chi-Squared contingency table was used to analyze the survival study data and an ANOVA test was used to analyze the germination study data. The p values for the survival studies were: 0.0039 for L. bicolor, 0.014 for R. pseudoacacia, and 0.029 for C. orbiculatus. In all three cases there was a significant difference between the survival rate of the plants in the control group and those in the high concentration group. The p values for the germination studies were: 0.0001 for L. bicolor, 0.0002 for R. pseudoacacia, and 0.0014 for C. orbiculatus. There was a significant difference between the germination rates of the seeds in the control vs. low, control vs. high, and the low vs. high concentrations in all three species. The significance in each study shows that the Natural Resources Crew on Warren Wilson College campus could potentially use A. altissima extracts as an herbicide to control other exotics, but further studies need to be conducted to be sure.
November 21, 2005
Tessa Branson
The Determination of Vitamin B-12 Deficiency on WWC campus in vegans vs. non-vegans.
Mentor: Dr. Victoria Collins
Abstract: Vitamin B-12 (Cobalamin) functions as a coenzyme for many important biochemical processes including the synthesis of DNA and red blood cells and the breakdown of amino and fatty acids. Vitamin B-12 is obtained primarily from animal proteins (ie, red meat, poultry, fish, eggs, and dairy). Plants and vegetables lack this vitamin unless they have been exposed to microorganisms. Vegans, due to the lack of animal proteins in their diet, are susceptible to a B-12 deficiency. B-12 deficiency has been documented in several populations worldwide and has serious health implications including: physical weakness, irritability, neurological depression, and dementia. The popularity of the vegan-vegetarian diet at WWC is a cause for concern, as B-12 deficiency is a potential campus health issue. The objectives of this experiment were to (a) develop a non-invasive method to monitor B-12 status and (b) to compare B-12 levels for vegetarians and omnivores at WWC. Cobalamin status can be measured indirectly from urinary levels of methylmalonic acid (MMA). If the level of vitamin B-12 in the body is adequate, MMA is converted to succinate, and then metabolized. If the level of vitamin B-12 is inadequate, MMA accumulates and is excreted in the urine. This study used GC/MS to identify and quantify urinary MMA. Gas chromatographic analysis of MMA requires the conversion of MMA to a volatile derivative. The derivatives are chromatographed and quantified by comparison to an internal standard, MMA-d3. Two different volatile derivatives, tri methyl silyl esters and methyl esters, were prepared. The chromatograms of the trimethylsilyl esters were not reproducible. Methyl ester derivatives could be quantified to a detection limit of about 3 micrograms/ml in solutions of pure MMA. Dried urine samples showed MMA concentrations below 3 micrograms/ml. Pretreatment methods for urine samples must be perfected before sample collection. After analysis procedures have been verified, vitamin B-12 status of Warren Wilson students can be assessed.
November 28, 2005
Alana Weintraub
Natural and Biological Control Methods of Reducing Mealybug Infestations in the Warren Wilson College Research Greenhouse.
Mentor: Dr. Amy Boyd.
Abstract: Mealybugs are common pests with immense economic importance, as they feed on the sap of agricultural crops, interior landscapes, and greenhouse plants worldwide. The female mealybugs have piercing mouthparts that enable them to suck sap and feed on a wide range of host plants. Removing sap causes a multitude of damage to the plant, and spreads pathogens and viruses from plant to plant. A waste-product is produced by the mealybug, called honeydew, that coats plants and serves as a medium for black fungal growth, which weakens and kills plants. Male and female mealybugs differ in appearance and life cycles. Several published treatments to control the infestation of citrus mealybugs include: rubbing alcohol, Malathion, Insecticidal Soap, pheromonal lures, and biological control agents. The published biological control agents used to control citrus mealybug infestations include: Cryptolaemus montrouzieri, Entomophtora fumosa, and Leptomastix abnormis. The objective of this study was to determine whether the biological method of introducing the predator ladybeetle Cryptolaemus montrouzieri or the current method of spraying M-pede insecticidal soap would better eradicate the citrus mealybug infestation from the Warren Wilson College Research Greenhouse. In this study, 24 Coleus plants were infested and placed within a large observation cage lined with special screening that prevented the immigration and emigration of larvae and biological control agents. The plants were separated into three treatment groups: control, insecticidal soap spray, and release of Cryptolaemus montrouzieri. The individual plants of the control and spray groups were placed within miniature cages, but the plants of the biological control group were not, which allowed the biological control agents freedom to fly about the large cage. Data collection involved taking initial and final counts of the populations of both adult and instar-staged citrus mealybugs for two months. The population changes of the three treatment groups for both adult and instar-staged mealybugs were subjected to ANOVA, the statistical analysis of variance. The p-value for the population change of adult citrus mealybugs during month one was 0.2038, which is considered not significant. The p-value for the population change of instar-staged citrus mealybugs during month one was 0.2326, which is considered not significant. The p-value for the population change of adult citrus mealybugs during month two was 0.0912, which is considered not quite significant at the 0.05-level. The p-value for the population change of instar-staged citrus mealybugs during month two was 0.0012, which is considered very significant. The only data subjected to Tukey-Kramer Multiple Comparison tests were the population changes of the instar-staged citrus mealybugs during month two, because it was the only significantly different data group. The Cryptolaemus montrouzieri and the insecticidal soap treatment groups differed significantly, with p<0.01; C. montrouzieri and the control treatment groups did not differ significantly, with p>0.05; and the insecticidal soap and the control groups differed significantly, with p<0.01. Potential sources of error and experimental errors may have contributed to the results of the C. montrouzieri treatment groups, in which the population changes of adult and instar-staged mealybugs decreased during the first month, yet the adult population of mealybugs increased during the second month. 
These errors include counting errors, as well as the escape of some of the biological control agents from the observation cage.
January 30, 2006
Paul Bailey
Bird Diversity on Warren Wilson College Campus
Mentor: Dr. Lou Weber
Abstract: In the fall of 2005 Warren Wilson College cut a 0.5 acre stand of white pine (Pinus strobus) on Christmas Tree Hill to plant native grasses and provide early successional bird species habitat. This followed a study in the spring of 2002 in which Fletcher compared winter bird diversity at North Lane and Pumphouse Stand. The North Lane site had a complete overstory removal in 2000 by Warren Wilson College because of a southern pine beetle (Dendroctonus frontalis) outbreak. The Pumphouse Stand was a dominant white pine site with continuous canopy. Fletcher found the North Lane clearing to have a considerably higher number of bird species than Pumphouse Stand. More recently, the Pumphouse Stand has been thinned by Warren Wilson College to bring more light through the canopy to allow for propagation of other tree species. My objectives were to compare North Lane bird diversity to the Wildlife Plot and Pumphouse Stand, and also to Fletcher’s 2002 data. I also intend to suggest management implications for bird habitat on Warren Wilson College campus. I recorded bird diversity at each site from August 2005 through October 2005. I made five observations at each site during this period for a total of fifteen observation dates. Observations were done at dusk, during the evening chorus. I counted a total of seventeen bird species from three sites. Fifteen of the species were present in the Wildlife Plot, fourteen in North Lane, and nine at Pumphouse Stand. This suggests that bird populations on Warren Wilson College campus prefer small forest openings to pine stands. About 5.8 acres (0.98%) of the 640 acres of Warren Wilson College forest are open canopy (10 years old). Only 0.5 acres were cut for the purpose of wildlife habitat. I suggest Warren Wilson College manage for more diverse forest stratification for wildlife habitat.
January 30, 2006
Katherine Kennedy
Serum Mineral Levels in Piglets on the Warren Wilson College Farm
Mentor: Dr. Jeff Holmes
Abstract: The objective of this study was to compare serum mineral concentrations and average daily weight gain of piglets raised on pasture to those of piglets raised in a barn. The litters of three sows were placed in each treatment group. Blood was drawn and weights were taken from each piglet at one, ten, and twenty-eight days of age. The serum from each blood sample was analyzed by inductively coupled plasma optical emission spectrometry for iron, calcium, copper, magnesium, manganese, and zinc. Results were analyzed using unpaired t-tests with Welch corrections when necessary. Iron showed a not quite significant difference between treatment groups at one and ten days (p = 0.0858, p = 0.0772) and a significant difference at twenty-eight days (p = 0.0462), with outdoor piglets showing higher concentrations. Calcium showed a not quite significant difference at twenty-eight days (p = 0.0673). Copper showed a not quite significant difference at twenty-eight days (p = 0.0843). Average daily gain showed a significant difference between treatment groups (p = 0.0219). Calcium, copper, and average daily gain showed higher levels of minerals and greater weight gain in the indoor piglets than the outdoor piglets. All other minerals showed no significant difference between treatment groups (p > 0.05). The observed differences in iron levels are possibly due to soil access. The observed calcium, copper, and weight gain differences between treatment groups are possibly due to differences in access to sow feed, parasite pressure, or exposure to the elements. The results of this study do support the hypothesis that pasture-raised piglets can gain iron from soil access. However, the results do not support the same hypothesis for all other minerals under consideration or for average daily weight gain.
February 13, 2006Raccoon in cage
Melissa Fellin
Object Handling Behavior in Captive Raccoons (Procyon lotor).
Mentor: Dr. Robert Eckstein
For centuries, naturalists have claimed that raccoons possess a high degree of cleanliness when handling and consuming their food. This notion of object handling behavior is reflected in the name raccoon, which means that they scratch with their hands. Because of this belief, many regard food handling as a necessary habit to be performed by the raccoons each time an object is grasped. My objective was to determine if the object handling behavior of the raccoons changed when they were tested as individuals and within a group. For my study I used thirteen raccoons that were each presented with eight different objects, where duration of object handling and number of dunks were recorded. When comparing the individual and group trials of all the raccoons, it was found that there was no significant difference (P=0.112) in the object handling times. There was a significant difference (P=0.001) in the handling time among objects, ranging from ice cubes, handled the longest (265.4 sec), to pinecones, handled the least (69.6 sec). After comparing the individual and group handling times of the objects, there was no significant difference (P=0.207 and P=0.458) between when they were alone compared to when they were in groups. My conclusion is that, whether raccoons are housed as individuals or in groups, wildlife centers should focus on the types of objects placed within their environments.
(Photo by Melissa Fellin).
Feb. 13, 2006
Maryka Lier
Percent Cover and Survival Rate of Warm Season Grasses
Mentor: Dr. Greg Ettl
Native warm season grasses (NWSG) provide a unique cover type to this area. Although rare on the landscape in the Southern Appalachian Mountain Region, they are found in forest gaps and grassy balds and contribute to the vegetation diversity. Once maintained through anthropogenic fire, the grassy gaps are disappearing. In the fall of 2004, a 0.2 ha gap was cut in the White Pine forest of the Fortune property. The Warren Wilson Wildlife Gap was created in an effort to restore native grassland and provide habitat for wildlife. The objective of this study was to determine the extent of the establishment of warm season grass within the Wildlife Gap over the first growing season. In spring 2005, the gap was sprayed with herbicide, burned, and 100 1.5 square-meter plots were set up. A mix of 24 grass plugs was planted within the plots and allowed to grow over the summer. For this study I randomly chose 35 plots to sample. Using a quadrat, I measured the percent cover of NWSG relative to percent cover of other woody and herbaceous vegetation. I also measured the survival rate of Broomsedge (Andropogon virginicus), Purpletop (Tridens flavus), Purple Love Grass (Eragrostis spectabilis), Little Bluestem (Schizachyrium scoparium), and Indiangrass (Sorghastrum nutans). Correlation analysis showed a negative relationship between NWSG cover and tree species cover (r=-0.31, p= 0.070), NWSG cover and shrub/herbaceous cover (r=-0.43, p=0.012), and NWSG cover and tree/shrub/herbaceous cover (r=-0.52, p=0.0015). A Kruskal-Wallis nonparametric test and a post-hoc multiple comparisons test showed a difference in mean survival rate among species (p= 0.0025) with a significant difference between the survival rates of Purpletop and Broomsedge (p<0.01), and Purpletop and Purple Lovegrass (p<0.05). The negative correlation of NWSG relative to other vegetation indicates a need for weeding, burning, and herbiciding to cut back competition to NWSG. The survival rate of Purpletop was significantly different from other species but I believe that it should still be included in other restoration projects because of its aesthetic appeal, as well as to encourage diversity. Grass establishment within the first year was a success with 61% of the grass plugs surviving and 26% grass cover.
February 27, 2006
Julia York
The Antiviral Effects of Thirteen Botanical Essential Oils on Three Phages of E. coli
Mentor: Dr. Jeff Holmes
Abstract: Essential oils are volatile oils isolated from non-woody plant material that have been proposed to possess medicinal properties. Although the use of essential oils has gained popularity as an alternative healing modality, the practice is not widely accepted by the medical community. Little research has been conducted on the antiviral effects of essential oils. The objective of this study was to determine the in vitro antiviral effects of thirteen essential oils on three bacteriophages of Escherichia coli. Bacteriophages are similar to mammalian viruses, but are easier to cultivate and manipulate in laboratory experiments. Each essential oil was incubated with concentrated phage stock for twenty-four hours, and viral plaque formation was assessed using a plaque formation assay. Phage and bacteria were plated at two dilutions, ~10^-3 and 10^-5. Three aliquots of each dilution were plated. As a control, phages were separately treated with mineral oil and dilution buffer. The average plaque number per treatment was divided by the average plaque number per control to derive the percent plaque reduction. Six of thirteen oils (46%) exhibited a plaque reduction greater than 90% for both dilutions of T2 phage, seven of thirteen oils (54%) exhibited a plaque reduction greater than 90% for both dilutions of T4 phage, and three of thirteen oils (23%) exhibited a plaque reduction greater than 90% for both dilutions of ΦX174 phage. Eight oils (62%) inactivated at least one phage. Although the oils did not affect each phage equally, the active oils tended to inhibit multiple phages, suggesting a general, rather than phage-specific, mode of action. Essential oils can possess strong antiviral effects, suggesting they may have potential use in clinical practice.
February 27, 2006
Amanda J. Davis
Water quality assessment of the Swannanoa River using macroinvertebrates.
Mentor: Dr. Lou Weber
Abstract: The Swannanoa River flows through the Warren Wilson College campus and is a part of the French Broad River Watershed in western North Carolina. The North Carolina Division of Water Quality performed its most recent study on the Swannanoa in 2002, which indicated fair water quality. In 2003, a manufacturing plant near the river burned and in 2004 hurricanes Ivan and Frances caused severe flooding. Macroinvertebrates are used in water testing because they provide a rating for long-term quality based on tolerance to pollutants. I used the North Carolina Biotic Index (NCBI) to determine water quality. My objectives were to determine overall water quality of the Swannanoa, compare the current rating to past studies, examine community composition, and to determine if Warren Wilson College affects water quality. Kick net samples were taken three times in August-September 2005 from three campus locations. Specimens were preserved in ethanol and identified to the lowest possible taxon. The overall NCBI rating was 5.54, indicating good-fair water quality and an increase from the 2002 rating. There was no significant difference between ratings from upstream to downstream (p= 0.208), suggesting that Warren Wilson College has no detectable effect on water quality. The Shannon diversity index was used to determine species richness and evenness. The dominant species at nearly all sites were caddisflies, resulting in little species evenness. Water quality did not decline long-term after the fire and floods, showing the ability of a natural ecosystem to recover after damaging events.
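For reference, the Shannon diversity index mentioned above is straightforward to compute; a minimal Python sketch with made-up counts (the abstract does not report the raw taxon counts):
import numpy as np

# Shannon diversity H' = -sum(p_i * ln(p_i)) over taxon proportions p_i.
counts = np.array([120, 30, 10, 5])   # hypothetical macroinvertebrate counts
p = counts / counts.sum()
H = -np.sum(p * np.log(p))
print(round(H, 3))   # a low H here reflects dominance by one taxon (e.g. caddisflies)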
March 6, 2006
Andrew Morin
Arsenic Levels in the Warren Wilson Alpine Tower and the Surrounding Substrate
Mentor: Dr. John Brock
Abstract: Arsenic has been used in wood preservatives for over 70 years (Stilwell 2005). It is an effective pesticide but has negative health impacts on humans, including skin, liver and bladder cancer (EPA 1999). The Warren Wilson Alpine Tower was constructed in the summer of 2001 as a donation from Alpine Towers International. The wood preservative used to treat the lumber was chromated copper arsenate (CCA), which is standard in the pole industry (Zartman personal communication). Sampling methods were based on EPA protocols (2001) with modifications to determine the amount of leaching into the soil under the Alpine Tower. Soil and buffering material (mulch) samples were taken from below the midpoint of horizontal supports and 5 cm inside of vertical supports. Samples were taken from the top layer of substrate, between 6-8 cm deep, and between 14-16 cm deep. Results showed arsenic levels ranging from 63.4 parts per billion (ppb) to 124.5 ppb for top-level samples, and from below detection limits to 87.9 ppb for 15 cm deep soil samples. There was a significant variation between samples taken from the top layer and 14-16 cm deep, with a p-value < 0.05. An excess of control samples were taken from a variety of sites around campus, and a random number generator was used to select control samples for analysis. All control samples had arsenic levels below the limits of detection. Wood samples were also taken from the tower itself, in locations directly above soil samples, and showed levels of arsenic up to 573 ppb. Though the soil samples showed levels of arsenic above most state cleanup levels, it is ill-advised to attempt any mitigation besides the best management practices already in use by the Warren Wilson Outdoor Leadership Program. These practices include coating the tower with a sealant every one to two years, and advising participants to wash hands before eating. Further research is needed to clarify problems with the matrix modifier. Additional experiments are required to determine the amount of arsenic leached from the tower onto participants’ hands, as well as the amount absorbed through the skin.
March 6, 2006
Emilie Erich
Phytoextraction of Copper and Lead by Sagittaria graminea and Pontederia cordata in Beaver Lake Stormwater Wetland
Mentors: Dr. Mark Brenner and Dr. John Brock
Abstract: Constructed wetlands can improve water quality through several mechanisms including plant uptake. Beaver Lake Stormwater Wetland (BLSW), a constructed wetland currently receiving urban runoff from 60 acres of Asheville, NC, was designed to trap sediment and reduce nutrients and inorganic contaminants such as metals. Water quality data from the BLSW outflow consistently indicates the presence of Cu and Pb in the wetland outflow. The uptake and translocation of inorganic contaminants is known as phytoextraction, and results in the accumulation of inorganic contaminants in the shoots. It is possible that the vegetation present in BLSW may be phytoextracting Cu and Pb. The most predominant native wetland plant species are Pickerelweed (Pontederia cordata) and Grassleaf arrowhead (Sagittaria graminea). The objective of my study was to determine if the Grassleaf arrowhead and Pickerelweed in BLSW were phytoextracting Cu and Pb, and to compare the concentrations of Cu and Pb in Pickerelweed with the concentration of Cu and Pb in Grassleaf arrowhead. The samples from each species in the wetland were obtained through systematic sampling. Greenhouse-raised individuals from each species were used as a control group for each species to show baseline levels of Cu and Pb in plants grown in a relatively contaminant-free environment. All samples were analyzed with graphite furnace atomic absorption spectrophotometry following standard analytical methods. External calibration curves were used to determine the unknown concentration of Cu and Pb in each sample, and the sample results were used to calculate the mean concentration of Cu and Pb in the dry leaf. The mean concentration of Cu in Pickerelweed leaf was 4.8 ppm in BLSW plants and 8.6 ppm in the controls. The mean concentration of Cu in Grassleaf arrowhead leaf was 5.7 ppm in BLSW plants and 63 ppm in the controls. The mean concentration of Pb in the Pickerelweed leaf was 0.014 ppm in BLSW plants and 0.057 ppm in control plants. There was no significant difference between the mean concentration of Cu in Pickerelweed and Grassleaf arrowhead leaves from BLSW (p > 0.05). There was a significant difference in the mean concentration of Pb in Pickerelweed and Grassleaf arrowhead leaves from BLSW (p < 0.05). The control results showed unexpectedly high levels of each metal with considerable variability in each control group. These control results are likely due to contamination and a change in chemical environment. The results for the BLSW sample groups show that Pickerelweed and Grassleaf arrowhead phytoextracted Cu and Pb from BLSW, and that the concentration of lead in Grassleaf arrowhead leaves was significantly higher than that in Pickerelweed leaves from BLSW. Though Pickerelweed and Grassleaf arrowhead phytoextracted both Cu and Pb from BLSW, the levels of metal accumulation were much lower than those of hyperaccumulators, indicating that neither species is likely to contribute substantially to the reduction of Cu and Pb in BLSW.
March 20, 2006
Kantesh Dodwani
Recycling Polypropylene by Pyrolysis.
Mentor: Dr. Dean Kahl
Abstract: This research project was designed to determine whether polypropylene (plastic # 5) could be converted into useful chemical feedstock or fuel using relatively simple technology. In the United States, 23 million tons of plastics are disposed as waste each year. Approximately 14% of plastic wastes are recycled yearly. Plastics are recognized by numbers 1 to 7. Plastic number 1 is most recyclable and number 7 is least recyclable. The objectives of the study were (a) to determine if polypropylene could be pyrolyzed, (b) to determine the identity of pyrolysis products and (c) to determine if pyrolysis products could be used for fuel or chemical feedstock. Plastic # 5 (polypropylene) was collected from the Warren Wilson recycling center. Using vacuum distillation, the polypropylene was pyrolyzed. Vacuum distillation removes air from the system to prevent combustion. The vacuum made it possible to collect the pyrolysis products. The experiments were done with a catalyst (aluminum) and without a catalyst. The distilled product was analyzed using several instrumental techniques: GC, IR, and NMR. The results suggested the product was a mixture of 18–22 different compounds. These compounds were a mixture of alkanes and alkenes. A mass balance analysis shows that variable amounts of gases, liquids and solid residue were produced. The energy balance suggests that the energy required for pyrolysis is higher than the energy available in the distillate. However, the energy efficiency could be improved and conversion of the distillate to chemical feedstock could make the process economically viable.
March 20, 2006
Murugan Vinayagam
Dynamic Tunneling in a Quantum Mechanical System
Mentors: Dr. Donald F. Collins and Dr. Evan Wantland
Abstract: Quantum tunneling is a phenomenon of a particle existing in classically forbidden regions. Tunneling of quantum particles plays a major role in Scanning Tunneling Microscopy, Quantum Dots, Tunnel Diodes, and Very Large Scale Integrated Systems. Solutions to the Schrödinger equation are studied for a particle-in-a-box with a finite barrier in the center. All the solutions to the Schrödinger equation must be continuous and must satisfy the boundary conditions of zero value at the hard walls. These conditions lead to quantization. We chose to approximate the solutions numerically in order to simplify the process of finding solutions for various types of barriers. A search algorithm was programmed in MATLAB to approximate numerical solutions for the wave functions. The numerical solutions obtained from the search algorithm represent various stationary states of a wave function. Dynamic tunneling is shown by the time-dependence of the superposition of two stationary states. An animation is produced.
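A minimal sketch of such a search procedure follows (in Python for illustration rather than the authors' MATLAB; the box size, barrier height and width, and grid are assumed values, not those of the study):

    import numpy as np

    # Particle in a box [0, L] with a finite central barrier (units with hbar = 2m = 1).
    L_box, V0, bw = 1.0, 500.0, 0.2     # box width, barrier height/width: assumptions
    N = 1000
    x = np.linspace(0.0, L_box, N)
    h = x[1] - x[0]
    V = np.where(np.abs(x - L_box/2) < bw/2, V0, 0.0)

    def psi_end(E):
        """March -psi'' + V psi = E psi from x = 0 with psi(0) = 0 and return psi(L)."""
        psi = np.zeros(N)
        psi[1] = h                      # psi(0) = 0, unit initial slope
        for i in range(1, N - 1):
            psi[i+1] = 2.0*psi[i] - psi[i-1] + h*h*(V[i] - E)*psi[i]
        return psi[-1]

    # Eigenvalues are the energies where psi(L) = 0: scan, bracket sign changes, bisect.
    Es = np.linspace(1.0, 300.0, 1200)
    vals = [psi_end(E) for E in Es]
    levels = []
    for a, b, fa, fb in zip(Es[:-1], Es[1:], vals[:-1], vals[1:]):
        if fa * fb < 0.0:
            lo, hi, flo = a, b, fa
            for _ in range(40):
                mid = 0.5*(lo + hi)
                fm = psi_end(mid)
                if flo * fm <= 0.0:
                    hi = mid
                else:
                    lo, flo = mid, fm
            levels.append(0.5*(lo + hi))

    print(levels)   # states below V0 appear as nearly degenerate tunneling doublets

Superposing two such nearly degenerate stationary states and evolving their relative phase reproduces the dynamic tunneling animation the abstract describes.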
Megan Bryan
March 27, 2006
A Demographic Study of Kemp's Ridley (Lepidochelys kempii) and Green (Chelonia mydas) Sea Turtle Strandings
Mentor: Dr. Lou Weber
Abstract: For many thousands of years sea turtles have made the ocean their home. The Iroquois nation credits a giant sea turtle with having brought the first humans to land and thus creating the world as we know it. In recent history, however, all species of sea turtles have experienced a drastic decline in population numbers, largely an effect of human-related activities. This study focuses on only two of the six species currently found in U.S. waters: the Kemp's Ridley (Lepidochelys kempii) and Green (Chelonia mydas) sea turtles. Using stranding data taken from various South Carolina beaches from the period of 1980 to 2004, the objectives for this study are to (1) analyze the data for mortality patterns for each species in order to determine the sources of mortality present along the SC coast and (2) detect population changes over time and use these changes to make a statement about the effectiveness of current conservation efforts. The stranding data for each species were provided by the South Carolina Department of Natural Resources and were collected by trained volunteers and SCDNR staff. Patterns in the data indicate that the population of Kemp's Ridley turtles has been steadily increasing since 1980. In addition, the observed mortality for both species is significantly higher from 1993-2004 compared with the data collected from 1980-1992. The highest mortality rates occur during the months of April through August, with Greens between 27.0 and 38.9 cm curved carapace length (CCL) and Kemp's Ridleys between 22.0 and 51.9 cm CCL being hit the hardest. Strandings of both species of turtles were most frequent on highly populated beaches in Charleston County during the peak recreation and shrimp-trawling season. The results of this study suggest that, although humans have a negative impact on sea turtle populations overall, the current conservation efforts are paying off and we are slowly seeing an increase in numbers. Worldwide research, education, and legislative action should continue to be taken in order to further protect all species of turtles.
March 27, 2006
Celia Barbieri
Diatomaceous Earth as a De-worming Treatment for Pigs on the Warren Wilson College Farm
Mentor: Dr. Jeff Holmes
Abstract: Diatomaceous Earth (DE) is a geological deposit consisting of the crushed skeletons of diatoms, unicellular organisms that form intricate skeletons of amorphous silica. Diatomaceous earth has been thought to have de-worming capabilities because it is a collection of microscopic shards of glass that mechanically pierce the protective coating of parasites. The Warren Wilson College Farm currently uses Ivermectin (Ivomec), a conventional chemical de-worming treatment. The objective of this study was to determine the efficacy of Diatomaceous Earth as an alternative to Ivomec for de-worming pigs.
Five litters of piglets were born in September-October 2005. At weaning, or roughly 28 days of age, I divided each litter, by sex and weight, into an Ivomec treatment group and a non-Ivomec group. The entire Field was fed 2 lb DE/ton feed from October till February and 40 lb DE/ton feed from February till March. I took weights and fecal samples rectally from each pig on five sample dates. I used the double centrifugation method to produce slides and count parasite eggs. I identified and counted roundworm (Ascaris suum), whipworm (Trichuris suis), Strongyloides (Oesophagostomum dentatum), and Coccidia (Isospora suis) eggs. A set of contingency tables and Fisher Exact tests was used to compare parasite prevalence between treatment groups and sample dates. The p-values found ranged from 0.359 to 1.0, indicating no statistically significant differences for any of the comparisons made. I also used the weight measurements to compare the growth rates of the pigs treated with Ivomec and those that were not. The average weights at weaning and at the final sample date were compared using a t-test, and no significant difference was found between the means. These results indicate that there was no evidence that Ivomec made a difference in weight gain or intestinal parasite prevalence. Comparing the data collected before and after the increase in DE dose does not support any dramatic effect on parasite levels or weight gains. In order to recommend the use of Diatomaceous Earth as a de-worming treatment on the WWC farm, further research must be done.
April 3, 2006
Jesenia Mejias
Antioxidant Properties of Guava (Psidium guajava L.)
Mentor: Dr. Victoria Collins
Abstract: Antioxidants, a dietary requirement for humans, prevent and regulate free radicals formed during regular metabolic processes. Free radicals are reactive, unstable molecules with an unpaired electron, which can cause damage to cell membranes, proteins, and nucleic acids. The antioxidant capacity of a food can be estimated by its ability to reduce the stable free radical 1,1-diphenyl-2-picrylhydrazyl (DPPH*). Guava, a part of the traditional Hispanic diet, is a berry which can be eaten raw or processed and is found throughout the Tropics. The objectives of the study were to determine the antioxidant capacity of guava fruits and juices using DPPH*, to compare four brands of bottled guava juice to each other and to fresh fruit, and to compare the effects of ripeness on the antioxidant capacity of guava fruits. Three bottles each of four brands of guava juice and twenty-nine guavas were purchased at a South Florida market. Methanol extracts of guava fruits and juices were mixed with methanolic DPPH* solution, and the reduction of the DPPH* radical was measured by the decrease in absorbance at 520 nm. Absorbance decreases due to known amounts of vitamin C were used as the reference for the free radical scavenging capacity of guava. Antioxidant capacities of the juice brands varied significantly, from 9.0 to 45 mg vitamin C equivalent per 100 mL of juice. The guavas were significantly different from one ripeness category to another. The mean of the young and ripe guavas (38.07 mg vitamin C equivalent per ~100 g fruit) was significantly greater than the mean of all the juices (24.31 mg vitamin C equivalent per 100 mL juice), which are comparable serving sizes. The variability of the juice data could be due to the shelf life of the juices and the varying fruit content among the brands. These data may help consumers make more informed choices about the types of fruit to buy for maximum vitamin C intake. An increase in dietary antioxidants may decrease cellular dysfunction caused by free radical damage, thereby decreasing the risk of many health problems.
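The external-calibration step can be illustrated with a short sketch; the standard amounts, absorbance readings, and sample value below are invented for illustration and are not data from the study:

    import numpy as np

    # External calibration: absorbance decrease at 520 nm vs. known vitamin C amounts.
    # The standards and the sample reading are made-up illustrative numbers.
    vitc_mg = np.array([0.0, 0.5, 1.0, 2.0, 4.0])       # mg vitamin C in each standard
    delta_A = np.array([0.00, 0.11, 0.21, 0.44, 0.86])  # observed absorbance decrease

    slope, intercept = np.polyfit(vitc_mg, delta_A, 1)  # linear least-squares fit

    sample_delta_A = 0.35                               # absorbance decrease of an extract
    vitc_equiv = (sample_delta_A - intercept) / slope   # invert the calibration line
    print(f"{vitc_equiv:.2f} mg vitamin C equivalent")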
April 3, 2006
Richard Peart
Computational Modelling of Photochemical Smog
Mentor: Dr. Dean C. Kahl
Abstract: Smog is a system of air pollutants that interact with each other in the presence of sunlight, creating ground-level ozone. Ground-level ozone is harmful to plants and humans: long-term exposure to ozone has been shown to reduce pulmonary function and hinder plant reproduction and storage mechanisms. It is therefore important that the chemistry of smog formation be studied. Additionally, air pollution legislation is based on chemical and mathematical models that describe smog formation. Smog chambers, 8 m by 8 m by 8 m Teflon-lined rooms, are used to analyze smog in controlled, weather-free environments and to simulate the formation of smog under atmospheric conditions. Mathematical modelling is used to simulate the formation of photochemical smog. In this study, radioactive decay and smog formation were modelled using Java and two numerical integration methods, the Euler method and the Runge-Kutta 4 (RK4) method. The results of these numerical models were compared to the analytical solution (radioactive decay model) and actual smog chamber data (smog model). The results suggest that RK4 is ideal for modelling radioactive decay, as the relative error between the analytical solution and the RK4 model was 0%. The results also suggest that there is no difference between RK4 and the Euler method when used to model smog formation.
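A minimal sketch of the two integrators on the radioactive-decay benchmark (in Python for brevity rather than the study's Java; parameter values are illustrative):

    import numpy as np

    # Radioactive decay dN/dt = -lam*N, N(0) = N0, with analytic solution N0*exp(-lam*t).
    lam, N0, dt, T = 0.5, 1000.0, 0.1, 10.0   # illustrative parameters
    steps = int(T / dt)

    def f(N):
        return -lam * N

    N_euler = N_rk4 = N0
    for _ in range(steps):
        # Euler: one slope evaluation per step
        N_euler += dt * f(N_euler)
        # Classical 4th-order Runge-Kutta: four slope evaluations per step
        k1 = f(N_rk4)
        k2 = f(N_rk4 + 0.5*dt*k1)
        k3 = f(N_rk4 + 0.5*dt*k2)
        k4 = f(N_rk4 + dt*k3)
        N_rk4 += dt*(k1 + 2*k2 + 2*k3 + k4)/6.0

    exact = N0 * np.exp(-lam * T)
    print(f"exact {exact:.6f}  Euler {N_euler:.6f}  RK4 {N_rk4:.6f}")
    # Euler's global error scales as dt; RK4's as dt**4, invisible at this precision.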
April 10, 2006
Brandon Schmandt
Evaluating Stormwater Management at the Wal-Mart Supercenter in Asheville
Mentor: Dr. Mark Brenner
Abstract: North Carolina implemented the National Pollution Discharge Elimination System (NPDES) Phase II in 2004. The policy mandates regulation of stormwater discharge in the city of Asheville. NPDES Phase II is intended to prevent sedimentation of rivers and erosion by requiring developments to use Best Management Practices (BMP) designed to remove 85 percent of sediments before stormwater discharge. The Wal-Mart Supercenter development adjacent to the Swannanoa River was one of the first developments in Asheville to be subject to the NPDES Phase II regulations. BMP employed at the development include filter strips and retention basins. Stormwater discharge has been shown to increase total suspended solids (TSS) and Pb concentration in rivers (Deletic 2005; Gardner and Carey 2004). My first objective was to evaluate how effectively the stormwater management system at the Wal-Mart Supercenter prevents sediments and Pb from being discharged into the Swannanoa River. My second objective was to determine if Pb concentration relative to TSS was higher in the Swannanoa River or in a retention basin at the site of the Wal-Mart Supercenter. My methods for analysis of TSS and total Pb concentration came from Standard Methods for the Examination of Water and Wastewater (APHA 1999). There was no significant difference between upstream and downstream samples for TSS and total Pb concentration. Four out of five samples showed significantly higher TSS in the Swannanoa River than in a retention basin, and significantly higher Pb concentration in a retention basin than in the Swannanoa River. The data indicate that stormwater from the Wal-Mart Supercenter does not significantly impact TSS and Pb concentration in the Swannanoa River. The data suggest that suspended solids in a retention basin have a higher lead concentration than suspended solids in the Swannanoa River. Future research concerning chemical pollutants in retention basins could indicate a need for regulation of stormwater pollutants other than sediments.
April 10, 2006
Stacey Hollis
Heavy Metals In Tern Prey
Mentor: Dr. John Brock
Abstract: Rising concentrations of heavy metal pollution can have detrimental impacts on marine ecosystems. Through biomagnification across food chains and bioaccumulation within individual species, heavy metals may contribute to developmental abnormalities in seabirds. Common Terns (Sterna hirundo) are piscivorous seabirds that nest communally on inshore and offshore islands off the coast of the eastern United States. They are generalists, which leads to fluctuations in diet across seasons based on prey availability. Since 2001, offspring of these terns have been observed with unexplained abnormalities in some inshore island-breeding colonies. Researchers are investigating the cause of these defects by analyzing chicks and eggs for heavy metals. Through Warren Wilson College, I analyzed the discarded prey species of these birds for four heavy metals: Pb, Zn, Cd, and Cr. My first objective was to determine whether metal concentrations in tern prey differ with proximity to shore; my second was to determine whether there is a difference in metal concentrations among the different species that make up a tern diet.
I collected prey items from an inshore and an offshore island tern colony off the coast of Maine, Seal Island National Wildlife Refuge and Pond Island NWR. I analyzed my samples using an inductively coupled plasma-optical emission spectrometer. In my sample groups, I found detectable levels of chromium, lead, and zinc. No detectable levels of cadmium were observed in any of my samples. In my island comparisons, I found no significant difference in average heavy metal concentrations between Seal and Pond Island. In my species comparison, I found stickleback to have significantly higher concentrations of chromium and zinc than butterfish. My statistical results suggest that the metals I tested for might not contribute to the inshore phenomenon of chick abnormalities. Additionally, the difference in zinc concentrations found between stickleback and butterfish suggests that the fluctuations in the generalist diet of the common tern might influence tern metal levels across years, based on prey availability. In comparing metal levels in my samples to literature values, I found that, of the concentrations I observed in my samples, only lead met what are considered to be elevated levels. In addition to ruling out the metals I tested for as the cause of the tern chick abnormalities, I believe that my research was a good introductory study on metal levels in fish species and that additional research could be conducted on these and other metals found in Maine's coastal waters.
April 17, 2006
Tim Manney
The Effects of Cooking Time on the Strength of Pitch Glue made from Norway spruce (Picea abies) Oleoresin
Mentor: Dr. Mark Brenner
Abstract: This study explores the relationship between the strength of pitch glue made from Norway spruce (Picea abies) oleoresin and cooking time. Oleoresin was gathered from seven trees, each of which served as an experimental replicate, during the fall of 2005. Each replicate was heated separately, equal amounts were skimmed from the top, and the resin was mixed with half that volume of charcoal dust to produce loaded resin glue. Two samples of glue were taken from the mixture every fifteen minutes for two hours. The first was used to create a glue bond that was subsequently subjected to strength tests to estimate the strength of the glue at that cooking time. The second was used to measure the density of the glue at that time. A repeated measures ANOVA indicated that glue from the 15 minute time group was significantly stronger than glue from the 105 minute time group (p<0.05) and 120 minute time group (p<0.01). A linear regression analysis indicated a significant negative relationship between strength and cooking time (p=0.048) with r² = 0.1382. The density of the 15 minute time group was significantly less than the 105 minute (p<0.0001) and 120 minute time groups (p<0.0001). These results suggest that pitch glue made from Norway spruce oleoresin can be overcooked and as a result weakened. The significantly lower density of pitch glue from the 15 minute time group suggests higher proportions of volatile compounds. The volatiles plasticize the resin and could be a contributing factor to the significant difference in strength means.
17 April 2006
Lily Doyle
Gender Selection Through Olfactory Cues
Mentor: Dr. Greg Ettl
Abstract: Many mammals communicate through pheromones, which influence the behavior or physiology of other organisms (Martins et al. 2005). Although most animals can communicate through pheromones, primates are thought to have limited or no capability of sensing them (Keverne et al. 2004). Whether or not humans can communicate through pheromones, heterosexual males and females may favor the scent of the opposite sex (Martins et al. 2005). The objective of this study was to test the attractiveness of heterosexual male and female underarm scents, commercial pheromones, and boar scent to males and females. Underarm scents were collected from six students, a male and a female from each of three objective scent categories: mild, moderate, and strong. Male and female commercial pheromone attractants were tested, as well as boar saliva. Each trial compared two scents and a control, tested by 30 heterosexual males and 30 heterosexual females. Testers smelled and rated each sample on a visual analog scale, and ANOVA was used to compare responses. Overall there was a negative or neutral response to human scents. Male and female testers rated moderate male scent, strong male and female scents, and commercial pheromones significantly lower than controls. Male commercial pheromones were rated significantly lower than mild male scent and the control, with p-values less than 0.01. There was no significant difference between male and boar scent ratings by male and female testers.
April 24, 2006
Casey C. Gish
Isolating Wild Strains of Brewer's Yeast (Saccharomyces cerevisiae) for Eventual Comparison of Flavor Compounds and Tastes of the Fermentation Product
Mentor: Dr. Michael Torres
Abstract: Saccharomyces cerevisiae, a species of budding yeast belonging to the fungi kingdom, is a single-celled eukaryotic organism whose size can range from 5-10 micrometers in diameter. Yeast is thought to be the oldest organism cultivated by humans, used for its ability to leaven bread and produce alcohol through fermentation. Early brewers of beer relied on wild yeasts and bacteria for fermentation. Beers fermented by wild microorganisms are known as spontaneously fermented. Using wild microorganisms as opposed to lab-cultured yeast produced unique regional flavored beers. The objective of this study was to culture two wild strains of Saccharomyces cerevisiae, from two different regions, for eventual comparison of the dominant flavor compounds produced during fermentation.
Two wild strains of S. cerevisiae were cultured. One was from Western North Carolina, a mountainous temperate region at an elevation of approximately 2,000 feet with average precipitation of 54 inches; the second was isolated from Southern Maryland, located at sea level on the Chesapeake Bay with average precipitation of 41 inches. Starting with a simple glucose medium left open to the air of each region, various tests were applied to the isolated microorganisms, resulting in one unknown from each region testing positive as Saccharomyces cerevisiae. While the two strains isolated were unable to ferment low-gravity wort, this study verified reliable and inexpensive methods for culturing wild strains of S. cerevisiae. Future work with these yeast strains could potentially yield viable brewing yeast capable of producing a uniquely flavored beer.
April 24th, 2006
Kim Hall
The Flirtatious Behaviors Between Single Heterosexuals
Mentor: Dr. Vicki Garlock
Abstract: Flirtation has been around since the beginning of time; however, there have not been many studies in psychology that deal with the actual behavior of flirtation. Studies have been conducted using video interviews, voice recordings, and open-ended questionnaires about relationship satisfaction. The objectives of this study were to develop a questionnaire to assess flirtatious behavior in men and women, to determine whether there was any difference between the flirtatious behaviors of men and women, and to look at which flirtatious behaviors might occur in combination with each other. A survey was created and distributed. The data were then analyzed with t-tests, one-way ANOVAs, and a Principal Components Analysis (PCA). PCA is a type of factor analysis that groups similar items into different factors, as the sketch below illustrates. Statistically significant gender differences were found on the Self-Flirt survey and on the Friend Flirt survey. No statistical significance was found between men and women on the Flirting Thoughts survey. One of the ANOVAs indicated that when Friend Flirt scores were analyzed according to self-assessed measures of attractiveness, the results were significant. When scores on the other two surveys were analyzed based on responses to the attractiveness questions, no other statistically significant differences were found. The PCA confirmed that certain flirtatious behaviors do lie together.
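The PCA step can be illustrated with a minimal sketch; the survey scores below are randomly generated stand-ins, not the study's data:

    import numpy as np

    # Toy version of the survey analysis: rows are respondents, columns are
    # flirting-behavior items scored 1-5 (values are made up for illustration).
    rng = np.random.default_rng(0)
    scores = rng.integers(1, 6, size=(30, 6)).astype(float)

    # Standardize each item, then diagonalize the correlation matrix.
    Z = (scores - scores.mean(axis=0)) / scores.std(axis=0)
    corr = Z.T @ Z / len(Z)
    eigvals, eigvecs = np.linalg.eigh(corr)
    order = np.argsort(eigvals)[::-1]          # components by explained variance

    print("variance explained:", eigvals[order] / eigvals.sum())
    print("loadings of first component:", eigvecs[:, order[0]])
    # Items with large loadings on the same component "lie together" as one factor.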
May 1, 2006
Colleen Blaine
The morphology of Spotted Salamanders (Ambystoma maculatum) in the presence of two different caged predators.
Mentor: Dr. Lou Weber
Abstract: Predator and prey interactions can influence the life history of a species. Aquatic species may be able to sense the chemical presence of a predator in the water through kairomones, chemical signals between species. This can be tested in the laboratory by using a mesocosm with a caged predator. The objective of this study was to conduct a laboratory experiment in 2005 to determine whether Spotted Salamander larval growth is affected by the presence of caged Red-Spotted Newts and caged Green Darner Dragonfly larvae. A natural pond survey at Warren Wilson College in Asheville, NC was also conducted in 2005 and 2006 to determine if the predators co-exist with Spotted Salamanders. Sixty salamanders spent thirty days in mesocosms with one of three treatments: caged Green Darner Dragonfly larvae, a caged Red-Spotted Newt, or an empty cage (no predator). The resulting larval lengths were analyzed using an ANOVA. Spotted Salamander larvae raised in the presence of Red-Spotted Newts were significantly smaller (p = 0.025) in mean total length than salamander larvae raised in the presence of the Green Darner Dragonfly larvae. The Spotted Salamander larvae raised with Red-Spotted Newts were significantly smaller (p = 0.0061) in mean tail width than the salamander larvae raised in the presence of either the Green Darner Dragonfly larvae or no predator. The mean head width of Spotted Salamander larvae raised with the Red-Spotted Newt was smaller than in the other two treatments, but not significantly so (p = 0.058). The mean tail length of Spotted Salamander larvae raised with the Red-Spotted Newt showed no significant difference (p = 0.12) between the treatments. The growth of Spotted Salamanders appears to be influenced by the presence of the Red-Spotted Newt.
May 1, 2006
Lucas Blass
Archaic and Modern Approaches to Case Hardening Mild Steel
Mentor: Dr. Victoria Collins
Abstract: The process of case hardening imparts a hard outer layer to steel, while maintaining flexibility in the softer inner core of the metal.
This study investigated differences in hardness produced by different methods of case hardening steel (the Rockwell hardness scale is a standard measure of steel hardness and was used in this study). Pieces of low-carbon steel were subjected to five different treatments: charcoal, bone meal, industrial compound (Ecco Carb), quenched and untreated, and an unquenched and untreated control. All samples except for the control were encased in sections of pipe and heated at 1650 °F (900 °C) for 8 hours, then quenched in oil. Samples were tested for hardness on the Rockwell scale. Mean hardness for the three replications of each treatment was compared using an ANOVA. Mean hardness is shown below, with standard error in the bottom row.
HRC Values for Five Treatments. Surviving column headings: Bone Meal, Ecco Carb, Oil Quench; bottom row: Standard Error (the table's numeric values were not preserved in this copy).
The industrial compound yielded the largest increase in hardness, although the bone meal and charcoal treatment groups also produced significant hardness gains compared to the control. The hardness gain produced by the Ecco Carb treatment is suitable for nearly any case hardening application, including but not limited to gun actions, crankshafts, and gears. The hardness gains produced by the charcoal and bone meal treatments were near the values needed for applications such as pry bars and other non-cutting tools. Although the hardness gain was more substantial in the industrial Ecco Carb treatment, the compound contains barium carbonate, a chemical which is classified as hazardous waste. This may make Ecco Carb an inappropriate compound for home shop use.
May 8, 2006
Hannah L. Barks
The effectiveness of the DSI Pro camera for the determination of the relative ages of different star clusters
Mentor: Dr. Donald F. Collins
Abstract: A star cluster is a group of stars that are approximately the same age, but are not all the same size or mass. In a young star cluster, the stars lie on the main sequence of the luminosity versus color index diagram, with the massive, hot, blue stars being brightest and the less massive, cooler, red stars being the faintest. In an old star cluster, the hot massive stars have evolved off the main sequence and have migrated into the high-luminosity, red region of the color index diagram. The objective of this experiment was to determine if the inexpensive Meade Deep Sky Imager Pro (DSI-Pro) camera coupled with a 20-cm Schmidt-Cassegrain telescope could be used for the determination of the relative ages of different star clusters. Four separate Meade filters were used in this experiment: an infrared (IR) block filter and the band-passes for red, green, and blue. The band-pass of each filter was obtained using a spectrophotometer. The transmission spectra showed that each filter transmitted IR light along with its designated color, which diluted the color index. Aperture photometry was used to measure the intensity of each star for each color band. The green luminosity and color index were then calculated and plotted on a color index diagram (similar to a Hertzsprung-Russell diagram). With corrections for the dilution of the color index by IR light, different star clusters were successfully observed at different stages of evolution.
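The photometry arithmetic behind the color index diagram can be illustrated with a short sketch; the aperture sums below are invented stand-ins for sky-subtracted counts, not measurements from the study:

    import numpy as np

    # Instrumental magnitudes and a color index from aperture photometry counts.
    blue_counts  = np.array([15200.0, 880.0, 4100.0])   # per-star flux, blue filter
    green_counts = np.array([9800.0, 2100.0, 4000.0])   # per-star flux, green filter

    m_blue  = -2.5 * np.log10(blue_counts)    # instrumental magnitude, blue
    m_green = -2.5 * np.log10(green_counts)   # instrumental magnitude, green

    color_index = m_blue - m_green            # analogous to B-V; smaller = bluer star
    for ci, mg in zip(color_index, m_green):
        print(f"green mag {mg:6.2f}   color index {ci:+5.2f}")
    # Plotting m_green against color_index gives the cluster's color index diagram.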
Double-donor complex in vertically coupled quantum dots in a threading magnetic field
• Ramón Manjarres-García (1), Gene Elizabeth Escorcia-Salas (1), Javier Manjarres-Torres (1), Ilia D Mikhailov (2), and José Sierra-Ortega (1, corresponding author)
Nanoscale Research Letters 2012, 7:531
DOI: 10.1186/1556-276X-7-531
Received: 10 July 2012
Accepted: 29 August 2012
Published: 26 September 2012
We consider a model of a hydrogen-like artificial molecule formed by two vertically coupled quantum dots in the shape of axially symmetrical thin layers, with an on-axis single donor impurity in each of them and with the magnetic field directed along the symmetry axis. We present numerical results for the energies of some low-lying levels as functions of the magnetic field applied along the symmetry axis for different quantum dot heights, radii, and separations between them. The evolution of the Aharonov-Bohm oscillations of the energy levels with the increase of the separation between dots is analyzed.
Keywords: Quantum dots; Adiabatic approximation; Artificial molecule. PACS: 78.67.-n; 78.67.Hc; 73.21.-b
An important feature in low-dimensional systems is the electron-electron interaction, because it plays a crucial role in understanding the electrical transport properties of quantum dots (QDs) at low temperatures [1]. Such systems may involve small or large numbers of electrons as well as being confined in one or more dimensions. The number of electrons in a QD can be varied over a considerable range. It is possible to control the size and the number of electrons and to observe their spatial distributions in QDs. The energy spectrum of a two-electron QD with a parabolic confinement, for which the two-particle wave equation can be separated completely, has been analyzed previously using different methods [2-5].
In the present work, we propose another exactly solvable two-electron heterostructure in which two separated electrons are confined in vertically coupled QDs with a special lens-like morphology. Together with two on-axis donors, these two electrons generate an artificial hydrogen-like molecule whose properties can be controlled by varying the geometric parameters and the strength of the magnetic field applied along the symmetry axis.
The model which we analyze below consists of two identical, axially symmetrical, vertically coupled QDs with an on-axis donor located in each of them (see Figure 1). The dimensions of the heterostructure are defined by the QDs' radius R, height W, and the separation d between them along the z-axis. We assume that the QDs have the shape of very thin layers whose profiles are given by the following dependence of the layer thickness w on the distance ρ from the axis:
$$w(\rho) = \frac{W}{1 + (\rho/R)^2} \qquad (1)$$
Figure 1
Scheme of the artificial hydrogen-like molecule.
Besides, for the sake of simplicity, we consider a model with infinite barrier confinement, which is defined in cylindrical coordinates as $V(\mathbf{r}) = 0$ if $0 < z < w(\rho)$, and $V(\mathbf{r}) = \infty$ otherwise.
Given that the thicknesses of the layers are much smaller than their lateral dimensions, one can take advantage of the adiabatic approximation in order to exclude from consideration the rapid particle motion along the z-axis [6, 7] and obtain the following expression for the effective Hamiltonian in polar coordinates:
$$H = \sum_{i=1,2} H_0(\boldsymbol{\rho}_i) + V(\boldsymbol{\rho}_1, \boldsymbol{\rho}_2) + \frac{2\pi^2}{W^2}; \qquad H_0(\boldsymbol{\rho}_i) = -\Delta_i^{2D} + i\gamma\,\frac{\partial}{\partial\vartheta_i} + \frac{\omega^2\rho_i^2}{4}; \qquad \omega^2 = \left(\frac{2\pi}{WR}\right)^2 + \gamma^2;$$
$$V(\boldsymbol{\rho}_1, \boldsymbol{\rho}_2) = \frac{2}{\sqrt{d^2 + (\boldsymbol{\rho}_1 - \boldsymbol{\rho}_2)^2}} - \sum_{i=1,2}\left[\frac{2}{\sqrt{d^2 + \rho_i^2}} + \frac{2}{\rho_i}\right] \qquad (2)$$
The effective Bohr radius $a_0 = \hbar^2\epsilon/(m^*e^2)$ as the unit of length, the effective Rydberg $Ry^* = e^2/(2\epsilon a_0) = \hbar^2/(2m^*a_0^2)$ as the energy unit, and $\gamma = \hbar eB/(2m^*c\,Ry^*)$ as the unit of the magnetic field strength have been used in Hamiltonian (Equation 2), with $m^*$ being the electron effective mass and $\epsilon$ the dielectric constant. The polar coordinates $\boldsymbol{\rho}_k = (\rho_k, \vartheta_k)$ labeled by $k = 1, 2$ correspond to the first and the second electrons, respectively. It is seen that for the selected particular profile given by Equation 1, the Hamiltonian (Equation 2) coincides with one which describes two particles in a 2D quantum dot with parabolic confinement and renormalized interaction. It is well known that such a Hamiltonian may be separated by using the center-of-mass, $\mathbf{R} = (\boldsymbol{\rho}_1 + \boldsymbol{\rho}_2)/2$, and relative, $\boldsymbol{\rho} = \boldsymbol{\rho}_1 - \boldsymbol{\rho}_2$, coordinates [8]:
$$H = H_R + 2H_\rho; \qquad H_R = -\frac{\Delta_R^{2D}}{2} + \frac{1}{2}\omega^2 R^2; \qquad H_\rho = -\Delta_\rho^{2D} + \frac{\omega^2\rho^2}{16} - \frac{3}{\rho} - \frac{4}{\sqrt{\rho^2 + 4d^2}} \qquad (3)$$
The wave function is factorized into two parts, $\psi(\mathbf{R}, \boldsymbol{\rho}) = \Phi(\mathbf{R})\,\varphi(\boldsymbol{\rho})$, describing the center-of-mass and relative motions, respectively. Meanwhile, the total energy splits into two terms depending on two radial $(N_R, n_\rho)$ and two azimuthal $(L_R, l_\rho)$ quantum numbers:
$$E(N_R, L_R; n_\rho, l_\rho) = E_R(N_R, L_R) + 2E_\rho(n_\rho, l_\rho) = (2N_R + |L_R| + 1)\,\omega + 2E_\rho(n_\rho, l_\rho) \qquad (4)$$
where the first term represents the well-known expression for the exact energy levels of a two-dimensional harmonic oscillator, labeled by the radial $N_R = 0, 1, 2, \ldots$ and azimuthal $L_R = 0, \pm 1, \pm 2, \ldots$ quantum numbers for the center-of-mass motion, and the relative motion energy $2E_\rho(n_\rho, l_\rho)$ must be found by solving the following one-dimensional Schrödinger equation:
$$-u''(\rho) + V(\rho)\,u(\rho) = E_\rho(n_\rho, l_\rho)\,u(\rho); \qquad V(\rho) = \frac{\omega^2\rho^2}{4} + \frac{l_\rho^2 - 1/4}{\rho^2} - \frac{3}{\rho} - \frac{4}{\sqrt{\rho^2 + 4d^2}} \qquad (5)$$
In our numerical work, the trigonometric sweep method [8] is used to solve this equation.
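For readers who want to reproduce the relative-motion spectrum qualitatively, a minimal sketch follows. It diagonalizes Equation 5 by finite differences rather than by the trigonometric sweep of [8], and both the parameter values and the sign pattern of the reconstructed potential are assumptions of this illustration:

    import numpy as np

    # Finite-difference diagonalization of Eq. (5); omega, d, and l_rho are
    # arbitrary illustrative values in effective atomic units.
    omega, d, l_rho = 1.0, 0.5, 0
    N, rho_max = 1500, 30.0
    rho = np.linspace(rho_max/N, rho_max, N)      # grid that avoids rho = 0
    h = rho[1] - rho[0]

    V = (omega**2 * rho**2 / 4.0 + (l_rho**2 - 0.25) / rho**2
         - 3.0 / rho - 4.0 / np.sqrt(rho**2 + 4.0*d**2))

    # -u'' + V u = E u with u -> 0 at both ends becomes a symmetric tridiagonal matrix.
    H = (np.diag(2.0/h**2 + V)
         - np.diag(np.ones(N-1)/h**2, k=1)
         - np.diag(np.ones(N-1)/h**2, k=-1))
    E = np.linalg.eigvalsh(H)
    print(E[:5])      # lowest relative-motion energies E_rho(n_rho, l_rho)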
Results and discussion
Before the results are shown and discussed, it is useful to specify the labeling of the quantum levels of the two-electron molecular complex. According to Equation 4, the energy levels $E(N_R, L_R; n_\rho, l_\rho)$ can be labeled by the four symbols $(N_R, L_R; n_\rho, l_\rho)$. Even and odd $l_\rho$ correspond to the spin singlet and triplet states, respectively, consistent with the Pauli exclusion principle.
We have performed numerical calculations of the energy levels of complexes with radii R between 20 and 100 nm for different separations between layers. In all calculations presented, the thickness W is taken as 0.4 nm. In order to highlight the role of the interplay between quantum size and correlation effects in the formation of the energy spectrum of our artificial system, as distinct from the natural hydrogen molecular complex, we have plotted in Figure 2 the potential curves $\tilde{E}(d) = E(N_R, L_R; n_\rho, l_\rho) + 2/d$, similar to those of the hydrogen molecule, in which the complex energies, with the electrostatic repulsion between donors included, are shown as functions of the separation d between QDs. In comparing them with the corresponding potential curves of the hydrogen molecule, one should take into account that here, in contrast to the hydrogen molecule, the electron motion is restricted to two separated thin layers. The energy dependencies of different levels, labeled by the four quantum numbers $(N_R, L_R; n_\rho, l_\rho)$, are shown in Figure 2 for QDs with two different radii, R = 40 nm and R = 100 nm. A clear difference in the behavior of the potential curves is readily seen. While the curves for QDs of small radius are smooth, without any crossovers, the corresponding potential curves change drastically as the QD radius becomes large. In the latter case, the energy levels become very sensitive to the variation of the separation between QDs, and the quantum size effect becomes essential, producing alteration of the energy gaps, multiple crossovers of levels with the same or different spins, and level reordering as the distance between QDs increases from 5 to 20 nm.
Figure 2
Energies $\tilde{E}(d)$ of some low-lying levels of the double-donor complex in vertically coupled QDs, as functions of the distance between them.
We ascribe the dramatic alteration of the potential curves with the increase of the separation between QDs from 5 to 20 nm observed in Figure 2 to the interplay between the structural confinement and the electron-electron repulsion. When the QDs' radii are small (R → 0), the confinement is strong and the kinetic energy (~1/R²) is larger than the electron-electron repulsion energy (~1/R); the opposite holds for QDs with large radii. Therefore, as the QDs' radii increase, the arrangement of the electronic structure for different energy levels changes from one typical of a gas-like system to a crystal-like one, accompanied by crossovers of the curves and reordering of the levels. As the two-electron arrangement at large separation between electrons becomes almost rigid, the relative motion of the electrons is frozen out, and the two-electron structure transforms into a rigid rotator with practically fixed separation between electrons. The electrons' motion in this case becomes similar to that in a 1D ring, and therefore the energy dependencies on the external magnetic field applied along the symmetry axis should be similar to those which exhibit the Aharonov-Bohm effect.
In order to verify this hypothesis, we present in Figure 3 the calculated molecular complex energies $E(N_R, L_R; n_\rho, l_\rho)$ of some lower levels as functions of the magnetic field strength for QDs with small (R = 40 nm, upper curves) and large (R = 100 nm, lower curves) radii.
Figure 3
Energies $E(N_R, L_R; n_\rho, l_\rho)$ of some low-lying levels of the double-donor complex in vertically coupled QDs, as functions of the magnetic field.
It is seen that for the QD of small radius, the energies increase smoothly with very few intersections. Such dependence is typical for gas-like systems, where the paramagnetic term contribution is negligible in comparison with the diamagnetic one. On the contrary, the energy dependency curves for the QD of large radius present multiple crossovers and level-ordering inversions as the magnetic field strength increases from 0 to 1. This is due to a competition between the diamagnetic (positive) and paramagnetic (negative) terms of the Hamiltonian, whose contributions to the total two-electron energy in QDs of large radii are of the same order while the electron arrangement is similar to a rigid rotator. In other words, the correlation in this case becomes strong, as the electrons are mainly located on opposite sides within a narrow ring-like region.
Finally, in Figures 4 and 5, we present results of the calculation of the density of electronic states for the double-donor molecular complex confined in vertically coupled QDs. It is clear from the discussion above that the presence of the magnetic field should produce a significant change of the density of the electronic states when the QDs' radii are sufficiently large. Indeed, it is seen from Figure 4 that under a relatively weak magnetic field (γ = 0.5), when the molecular complex is confined in QDs of 100-nm radius with 6-nm separation between them, the density of states becomes essentially more homogeneous, since the widths of individual lines are broadened and the gaps between them are reduced. Such a change of the density of states is observed due to a splitting and displacement of the individual lines accompanied by their crossovers and the reordering of the energy levels.
Figure 4
Density of states for two different values of the magnetic field, corresponding to low-lying levels of the double-donor complex in vertically coupled QDs.
Figure 5
Density of states for three different distances between layers, corresponding to low-lying levels of the double-donor complex in vertically coupled QDs.
In Figure 5, we present similar curves of the molecular complex density of states for three different separations between QDs. It is seen that the curves of the density of states are modified only slightly, essentially less than under variation of the magnetic field. Particularly, the lower energy peak positions are almost insensitive to any change of the distance between dots, while the upper energy peaks are noticeably displaced toward higher energy regions.
In short, we propose a simple numerical procedure for calculating the energies and wave functions of a molecular complex formed by two separated on-axis donors located in vertically coupled quantum dots with a particular lens-type morphology which produces in-plane parabolic confinement. We show that in the adiabatic approximation the Hamiltonian of this two-electron system, including the external magnetic field, is separable. The curves of the energy dependencies on the external magnetic field and the separation between quantum dots are presented. Analyzing the curves of the low-lying energies as functions of the magnetic field applied along the symmetry axis, we find that the two-electron configuration evolves from one similar to a rigid rotator to a gas-like one as the dot radii decrease. This quantum size effect is accompanied by a significant modification of the density of the energy states and of the energy dependencies on the external magnetic field and the geometric parameters of the structure.
This work was financed by the Universidad del Magdalena through the Vicerrectoría de Investigaciones (Código 01).
Authors’ Affiliations
(1) Group of Investigation in Condensed Matter Theory, Universidad del Magdalena
(2) Universidad Industrial de Santander
1. Kramer B: Proceedings of a NATO Advanced Study Institute on Quantum Coherence in Mesoscopic System: 1990 April 2-13; Les Arcs, France. New York: Plenum; 1991.
2. Maksym PA, Chakraborty T: Quantum dots in a magnetic field: role of electron-electron interactions. Phys Rev Lett 1990, 65:108-111. 10.1103/PhysRevLett.65.108
3. Pfannkuche D, Gudmundsson V, Maksym P: Comparison of a Hartree, a Hartree-Fock, and an exact treatment of quantum-dot helium. Phys Rev B 1993, 47:2244-2250. 10.1103/PhysRevB.47.2244
4. Zhu JL, Yu JZ, Li ZQ, Kawasoe Y: Exact solutions of two electrons in a quantum dot. J Phys Condens Matter 1996, 8:7857. 10.1088/0953-8984/8/42/005
5. Mikhailov ID, Betancur FJ: Energy spectra of two particles in a parabolic quantum dot: numerical sweep method. Phys Stat Sol (b) 1999, 213:325-332. 10.1002/(SICI)1521-3951(199906)213:2<325::AID-PSSB325>3.0.CO;2-W
6. Peeters FM, Schweigert VA: Two-electron quantum disks. Phys Rev B 1996, 53:1468-1474. 10.1103/PhysRevB.53.1468
7. Mikhailov ID, Marín JH, García F: Off-axis donors in quasi-two-dimensional quantum dots with cylindrical symmetry. Phys Stat Sol (b) 2005, 242(8):1636-1649. 10.1002/pssb.200540053
8. Betancur FJ, Mikhailov ID, Oliveira LE: Shallow donor states in GaAs-(Ga,Al)As quantum dots with different potential shapes. J Appl Phys D 1998, 31:3391. 10.1088/0022-3727/31/23/013
© Manjarres-García et al.; licensee Springer. 2012
The current battle for America is, as Angelo Codevilla has recently emphasized in his seminal essay, a war between the majority of Americans and America's ruling class. This conflict is a reflection of a battle between the two greatest scientists of the past two centuries, Charles Darwin and Albert Einstein. Einstein famously claimed that "God does not play dice with the universe," whereas Darwin claimed that God does, indeed, play dice with the universe. Codevilla pointed out that the self-image of the ruling class rests on its belief that humans are the unforeseen outcome of chance mutations acted upon by natural selection. Not so. God decreed the evolution of humans before time began. The ruling class stands with Darwin. We stand with Einstein.
In his 1859 book The Origin of Species, Darwin wrote that evolution by natural selection was completely consistent with determinism. However, by 1868, Darwin had realized that his theory of evolution required a fundamental indeterminism at the microscopic level. From the last chapter of his Variation of Animals and Plants Under Domestication:
[If] we assume that each particular variation was from the beginning of all time preordained … natural selection or survival of the fittest, must appear to us superfluous laws of nature.
Darwin’s followers knew perfectly well his theory was a challenge to determinism. Woodrow Wilson said in a speech made just before he became president:
[The] Constitution of the United States had been made under the dominion of the Newtonian Theory. … The makers of our Federal Constitution … constructed a government … to display the laws of nature. Politics in their thought was a variety of mechanics. … The government was to exist and move by virtue of the efficacy of “checks and balances.” … The trouble with the theory is that government is not a machine, but a living thing. It falls, not under the theory of the universe, but under the theory of organic life. It is accountable to Darwin, not to Newton. … Society is a living organism and must obey the laws of life, not of mechanics … a nation is a living thing and not a machine.
This is nonsense. Everything is a machine. Atoms, molecules, living organisms, planets, stars, galaxies, and the entire universe are machines, all subject to the same laws of mechanics. It is often believed that the development of quantum mechanics undermined determinism. One of the familiar facts of quantum theory is the Heisenberg Uncertainty Principle, and it is generally believed that this Principle establishes that God does indeed play dice.
Not true. The great physicist Max Planck pointed out long ago that the Schrödinger equation, the fundamental equation of quantum mechanics from which the Uncertainty Principle is mathematically derived, is even more deterministic than the equations of Newton that so annoyed Wilson. The limit of prediction given by the Uncertainty Principle has been known for decades to be due to interference from universes that are parallel to ours, not from God playing dice. The existence of these other universes is a necessary mathematical consequence of the Schrödinger equation itself, or more generally, of Newton's own mechanics in its most general form.
Wave packet
From Wikipedia, the free encyclopedia
"Wave train" redirects here. For the mathematics concept, see Periodic travelling wave.
A wave packet without dispersion (real- or imaginary part)
A wave packet with dispersion
In physics, a wave packet (or wave train) is a short "burst" or "envelope" of localized wave action that travels as a unit. A wave packet can be analyzed into, or can be synthesized from, an infinite set of component sinusoidal waves of different wavenumbers, with phases and amplitudes such that they interfere constructively only over a small region of space, and destructively elsewhere.[1] Each component wave function, and hence the wave packet, are solutions of a wave equation. Depending on the wave equation, the wave packet's profile may remain constant (no dispersion, see figure) or it may change (dispersion) while propagating.
Quantum mechanics ascribes a special significance to the wave packet; it is interpreted as a probability amplitude, its norm squared describing the probability density that a particle or particles in a particular state will be measured to have a given position or momentum. The wave equation is in this case the Schrödinger equation. It is possible to deduce the time evolution of a quantum mechanical system, similar to the process of the Hamiltonian formalism in classical mechanics. The dispersive character of solutions of the Schrödinger equation has played an important role in rejecting Schrödinger's original interpretation, and accepting the Born rule.
In the coordinate representation of the wave (such as the Cartesian coordinate system), the position of the physical object's localized probability is specified by the position of the packet solution. Moreover, the narrower the spatial wave packet, and therefore the better localized the position of the wave packet, the larger the spread in the momentum of the wave. This trade-off between spread in position and spread in momentum is a characteristic feature of the Heisenberg uncertainty principle, and will be illustrated below.
Historical background
In the early 1900s, it became apparent that classical mechanics had some major failings. Isaac Newton originally proposed the idea that light came in discrete packets, which he called corpuscles, but the wave-like behavior of many light phenomena quickly led scientists to favor a wave description of electromagnetism. It wasn't until the 1930s that the particle nature of light really began to be widely accepted in physics. The development of quantum mechanics — and its success at explaining confusing experimental results — was at the root of this acceptance. Thus, one of the basic concepts in the formulation of quantum mechanics is that of light coming in discrete bundles called photons. The energy of a light photon is a function of its frequency,
$$E = h\nu.$$ [2]
The photon's energy is equal to Planck's constant, h, multiplied by its frequency, ν. This resolved a problem in classical physics, called the ultraviolet catastrophe.
The ideas of quantum mechanics continued to be developed throughout the 20th century. The picture that was developed was of a particulate world, with all phenomena and matter made of and interacting with discrete particles; however, these particles were described by a probability wave. The interactions, locations, and all of physics would be reduced to the calculations of these probability amplitudes. The particle-like nature of the world has been confirmed by experiment over a century, while the wave-like phenomena could be characterized as consequences of the wave packet aspect of quantum particles, see wave-particle duality. According to the principle of complementarity, the wave-like and particle-like characteristics never manifest themselves at the same time, i.e. in the same experiment — see however the Afshar experiment and the lively discussion around it.
Basic behaviors of wave packets
Position space probability density of an initially Gaussian state moving in one dimension at minimally uncertain, constant momentum in free space.
Position space probability density of an initially Gaussian state trapped in an infinite potential well experiencing periodic Quantum Tunneling in a centered potential wall.
As an example of propagation without dispersion, consider wave solutions to the following wave equation,
$$\frac{\partial^2 u}{\partial t^2} = c^2\,\nabla^2 u,$$
where c is the speed of the wave's propagation in a given medium.
Using the physics time convention, exp(−iωt), the wave equation has plane-wave solutions
$$u(\mathbf{x},t) = e^{i(\mathbf{k}\cdot\mathbf{x} - \omega t)},$$
where
$$\omega^2 = |\mathbf{k}|^2 c^2, \qquad |\mathbf{k}|^2 = k_x^2 + k_y^2 + k_z^2.$$
This relation between ω and k should be valid so that the plane wave is a solution to the wave equation. It is called a dispersion relation.
To simplify, consider only waves propagating in one dimension (extension to three dimensions is straightforward). Then the general solution is
$$u(x,t) = A\, e^{i(kx - \omega t)} + B\, e^{-i(kx + \omega t)},$$
in which we may take ω = kc. The first term represents a wave propagating in the positive x-direction since it is a function of x − ct only; the second term, being a function of x + ct, represents a wave propagating in the negative x-direction.
A wave packet is a localized disturbance that results from the sum of many different wave forms. If the packet is strongly localized, more frequencies are needed to allow the constructive superposition in the region of localization and destructive superposition outside the region. From the basic solutions in one dimension, a general form of a wave packet can be expressed as
$$u(x,t) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} A(k)\; e^{i(kx - \omega(k)t)}\,dk.$$
As in the plane-wave case the wave packet travels to the right for ω(k) = kc, since u(x, t)= F(x − ct), and to the left for ω(k) = −kc, since u(x,t) = F(x + ct).
The factor $1/\sqrt{2\pi}$ comes from Fourier transform conventions. The amplitude A(k) contains the coefficients of the linear superposition of the plane-wave solutions. These coefficients can in turn be expressed as a function of u(x, t) evaluated at t = 0 by inverting the Fourier transform relation above:
$$A(k) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} u(x,0)\; e^{-ikx}\,dx.$$
For instance, choosing
$$u(x,0) = e^{-x^2 + ik_0 x},$$
we obtain
$$A(k) = \frac{1}{\sqrt{2}}\; e^{-\frac{(k-k_0)^2}{4}},$$
and finally
$$u(x,t) = e^{-(x-ct)^2 + ik_0(x-ct)} = e^{-(x-ct)^2} \left[\cos\!\left(2\pi \frac{x-ct}{\lambda}\right) + i\sin\!\left(2\pi\frac{x-ct}{\lambda}\right)\right].$$
The imaginary part is a sine wave with perpendicular polarisation to the cosine wave. The nondispersive propagation of the real or imaginary part of this wave packet is presented in the above animation.
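For completeness, the amplitude $A(k)$ used in this example follows from the inversion formula by a single standard Gaussian integral, $\int_{-\infty}^{\infty} e^{-x^2 + bx}\,dx = \sqrt{\pi}\, e^{b^2/4}$, here with $b = i(k_0 - k)$:
$$A(k) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} e^{-x^2 + ik_0 x}\, e^{-ikx}\,dx = \frac{\sqrt{\pi}}{\sqrt{2\pi}}\; e^{\,[i(k_0-k)]^2/4} = \frac{1}{\sqrt{2}}\; e^{-(k-k_0)^2/4}.$$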
By contrast, as an example of propagation now with dispersion, consider instead solutions to the Schrödinger equation (with m and ħ set equal to one),
$$i\,\frac{\partial u}{\partial t} = -\frac{1}{2}\,\nabla^2 u,$$
yielding the dispersion relation
$$\omega = \frac{1}{2}|\mathbf{k}|^2.$$
Once again, restricting attention to one dimension, the solution to the Schrödinger equation satisfying the initial condition $u(x,0) = e^{-x^2 + ik_0 x}$ is seen to be
$$u(x,t) = \frac{1}{\sqrt{1 + 2it}}\; e^{-\frac{1}{4}k_0^2}\; e^{-\frac{1}{1 + 2it}\left(x - \frac{ik_0}{2}\right)^2} = \frac{1}{\sqrt{1 + 2it}}\; e^{-\frac{1}{1 + 4t^2}(x - k_0 t)^2}\; e^{i \frac{1}{1 + 4t^2}\left((k_0 + 2tx)x - \frac{1}{2}t k_0^2\right)}.$$
An impression of the dispersive behavior of this wave packet is obtained by looking at the probability density,
$$|u(x,t)|^2 = \frac{1}{\sqrt{1+4t^2}}\; e^{-\frac{2(x-k_0 t)^2}{1+4t^2}}.$$
It is evident that this dispersive wave packet, while moving with constant group velocity $k_0$, is delocalizing rapidly: it has a width increasing with time as $\sqrt{1 + 4t^2} \to 2t$, so eventually it diffuses to an unlimited region of space.
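A minimal numerical check of this spreading (a Python sketch with illustrative grid parameters; the Fourier step implements the free evolution $\hat u(k,t) = e^{-ik^2t/2}\,\hat u(k,0)$ on a periodic grid):

    import numpy as np

    k0, t = 2.0, 3.0
    x = np.linspace(-60.0, 60.0, 4096)
    dx = x[1] - x[0]
    u0 = np.exp(-x**2 + 1j*k0*x)          # the initial packet used above

    # Evolve each Fourier mode by exp(-i*omega*t) with omega = k^2/2.
    k = 2.0*np.pi*np.fft.fftfreq(x.size, d=dx)
    u_t = np.fft.ifft(np.fft.fft(u0) * np.exp(-0.5j * k**2 * t))

    p = np.abs(u_t)**2
    p /= p.sum() * dx                      # normalize the probability density
    mean = (x * p).sum() * dx
    var = ((x - mean)**2 * p).sum() * dx

    print(mean / (k0 * t))                 # ~1: the packet moves at group velocity k0
    print(var / ((1.0 + 4.0*t**2) / 4.0))  # ~1: variance grows as (1 + 4t^2)/4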
Gaussian wave packets in quantum mechanics
The above dispersive Gaussian wave packet, unnormalized and just centered at the origin, can instead be written at t = 0 in 3D as:[3][4]
$$\psi(\mathbf{r},0) = e^{-\mathbf{r}\cdot\mathbf{r}/2a},$$
where a is a positive real number, the square of the width of the wave packet, $a = 2\langle \mathbf{r}\cdot\mathbf{r}\rangle/3\langle 1\rangle = 2(\Delta x)^2$.
The Fourier transform is also a Gaussian in terms of the wavenumber at t = 0, the k-vector (with inverse width $1/a = 2\langle\mathbf{k}\cdot\mathbf{k}\rangle/3\langle 1\rangle = 2(\Delta p_x)^2$, so that $\Delta x\,\Delta p_x = \hbar/2$; i.e., it saturates the uncertainty relation),
$$\psi(\mathbf{k},0) = (2\pi a)^{3/2}\; e^{-a\,\mathbf{k}\cdot\mathbf{k}/2}.$$
Each wavenumber mode simply acquires a phase $e^{-iEt/\hbar}$, with $E = \hbar^2\mathbf{k}\cdot\mathbf{k}/2m$, so the transform evolves as
$$\begin{aligned}\Psi(\mathbf{k}, t) &= (2\pi a)^{3/2}\; e^{-a\,\mathbf{k}\cdot\mathbf{k}/2}\, e^{-iEt/\hbar} \\ &= (2\pi a)^{3/2}\; e^{-a\,\mathbf{k}\cdot\mathbf{k}/2 - i(\hbar^2 \mathbf{k}\cdot\mathbf{k}/2m)t/\hbar} \\ &= (2\pi a)^{3/2}\; e^{-(a + i\hbar t/m)\,\mathbf{k}\cdot\mathbf{k}/2}.\end{aligned}$$
The inverse Fourier transform is still a Gaussian, but now the parameter a has become complex, and there is an overall normalization factor.[5]
$$\Psi(\mathbf{r},t) = \left(\frac{a}{a + i\hbar t/m}\right)^{3/2} e^{-\frac{\mathbf{r}\cdot\mathbf{r}}{2(a + i\hbar t/m)}}.$$
The integral of Ψ over all space is invariant, because it is the inner product of Ψ with the state of zero energy, which is a wave with infinite wavelength, a constant function of space. For any energy eigenstate η(x), the inner product,
$$\langle \eta | \psi \rangle = \int \eta^*(\mathbf{r})\,\psi(\mathbf{r})\,d^3\mathbf{r},$$
only changes in time in a simple way: its phase rotates with a frequency determined by the energy of η. When η has zero energy, like the infinite wavelength wave, it doesn't change at all.
The integral $\int |\Psi|^2 d^3\mathbf{r}$ is also invariant, which is a statement of the conservation of probability. Explicitly,
$$P(\mathbf{r}) = |\Psi|^2 = \Psi^*\Psi = \left(\frac{a}{\sqrt{a^2 + (\hbar t/m)^2}}\right)^3\; e^{-\frac{a\,\mathbf{r}\cdot\mathbf{r}}{a^2 + (\hbar t/m)^2}},$$
in which √a is the width of P(r) at t = 0; r is the distance from the origin; the speed of the particle is zero; and the time origin t = 0 can be chosen arbitrarily.
The width of the Gaussian is the interesting quantity which can be read off from the probability density, $|\Psi|^2$:
$$\sqrt{\frac{a^2 + (\hbar t/m)^2}{a}}.$$
This width eventually grows linearly in time, as ħt/(m√a), indicating wave-packet spreading.
For example, if an electron wave packet is initially localized in a region of atomic dimensions (i.e., $10^{-10}$ m), then the width of the packet doubles in about $10^{-16}$ s. Clearly, particle wave packets spread out very rapidly indeed (in free space):[6] for instance, after 1 ms, the width will have grown to about a kilometer.
This linear growth is a reflection of the momentum uncertainty: the wave packet is confined to a narrow $\Delta x = \sqrt{a/2}$, and so has a momentum which is uncertain (according to the uncertainty principle) by the amount $\hbar/\sqrt{2a}$, a spread in velocity of $\hbar/m\sqrt{2a}$, and thus in the future position by $\hbar t/m\sqrt{2a}$. The uncertainty relation is then a strict inequality, very far from saturation, indeed! The initial uncertainty $\Delta x\,\Delta p = \hbar/2$ has now increased by a factor of $\hbar t/ma$ (for large t).
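As an order-of-magnitude check on the electron example above (this is our arithmetic, using the width formula quoted earlier): the width $\sqrt{(a^2 + (\hbar t/m)^2)/a}$ doubles from its initial value $\sqrt{a}$ when $(\hbar t/m)^2 = 3a^2$, i.e. at $t = \sqrt{3}\,ma/\hbar$. With $a = (10^{-10}\,\mathrm{m})^2$ and the electron mass,
$$t = \sqrt{3}\;\frac{(9.1\times10^{-31}\,\mathrm{kg})(10^{-20}\,\mathrm{m}^2)}{1.05\times10^{-34}\,\mathrm{J\,s}} \approx 1.5\times10^{-16}\,\mathrm{s},$$
consistent with the $10^{-16}$ s quoted above.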
The Airy wave train
In contrast to the above Gaussian wave packet, it has been observed[7] that a particular wave function based on Airy functions propagates freely without envelope dispersion, maintaining its shape. It accelerates undistorted in the absence of a force field: $\psi = \mathrm{Ai}(B(x - B^3t^2))\, e^{iB^3t(x - 2B^3t^2/3)}$. (For simplicity, $\hbar = 1$, $m = 1/2$, and B is a constant; cf. nondimensionalization.)
Truncated view of time development for the Airy front in phase space.
Nevertheless, Ehrenfest's theorem is still valid in this force-free situation, because the state is both non-normalizable and has an undefined (infinite) $\langle x\rangle$ for all times. (To the extent that it can be defined, $\langle p\rangle = 0$ for all times, despite the apparent acceleration of the front.)
In phase space, this is evident in the pure state Wigner quasiprobability distribution of this wavetrain, whose shape in x and p is invariant as time progresses, but whose features accelerate to the right in accelerating parabolas $B(x - B^3t^2) + (p/B - tB^2)^2 = 0$,[8]
$$W(x,p;t) = W(x - B^3t^2,\; p - B^3t;\; 0) = \frac{1}{2^{1/3}\pi B}\;\mathrm{Ai}\!\left(2^{2/3}\left(Bx + \frac{p^2}{B^2} - 2Bpt\right)\right).$$
Note the momentum distribution obtained by integrating over all x is constant. Since this is the probability density in momentum space, it is evident that the wave function itself is not normalizable.
Free propagator
The narrow-width limit of the Gaussian wave packet solution discussed is the free propagator kernel K. For other differential equations, this is usually called the Green's function,[9] but in quantum mechanics it is traditional to reserve the name Green's function for the time Fourier transform of K.
Returning to one dimension for simplicity, when a is the infinitesimal quantity ε, the Gaussian initial condition, rescaled so that its integral is one,
\psi_0(x) = {1\over \sqrt{2\pi \epsilon} } e^{-{x^2\over 2\epsilon}} \,
becomes a delta function, δ(x), so that its time evolution,
K_t(x) = {1\over \sqrt{2\pi (i t + \epsilon)}} e^{ -{x^2 \over 2(it+\epsilon)} }\,
yields the propagator.
Note that a very narrow initial wave packet instantly becomes infinitely wide, but with a phase which is more rapidly oscillatory at large values of x. This might seem strange—the solution goes from being localized at one point to being "everywhere" at all later times, but it is a reflection of the enormous momentum uncertainty of a localized particle, as explained above.
Further note that the norm of the wave function is infinite, which is also correct, since the square of a delta function is divergent in the same way.
The factor involving ε is an infinitesimal quantity which is there to make sure that integrals over K are well defined. In the limit that ε→0, K becomes purely oscillatory, and integrals of K are not absolutely convergent. In the remainder of this section, it will be set to zero, but in order for all the integrations over intermediate states to be well defined, the limit ε→0 is to be only taken after the final state is calculated.
The propagator is then
K_t(x,y) = K_t(x-y) = {1\over \sqrt{2\pi it}} e^{i(x-y)^2 \over 2t} \, .
In the limit when t is small, the propagator, of course, goes to a delta function,
\lim_{t\rightarrow 0} K_t(x-y) = \delta(x-y) ~,
but only in the sense of distributions: The integral of this quantity multiplied by an arbitrary differentiable test function gives the value of the test function at zero.
To see this, note that the integral over all space of K equals 1 at all times,
\int_x K_t(x) = 1 \, ,
since this integral is the inner-product of K with the uniform wave function. But the phase factor in the exponent has a nonzero spatial derivative everywhere except at the origin, and so when the time is small there are fast phase cancellations at all but one point. This is rigorously true when the limit ε→0 is taken at the very end.
So the propagation kernel is the (future) time evolution of a delta function, and it is continuous, in a sense: it goes to the initial delta function at small times. If the initial wave function is an infinitely narrow spike at position y,
\psi_0(x) = \delta(x - y) \, ,
it becomes the oscillatory wave,
\psi_t(x) = {1\over \sqrt{2\pi i t}} e^{ i (x-y) ^2 /2t} \, .
Now, since every function can be written as a weighted sum of such narrow spikes,
\psi_0(x) = \int_y \psi_0(y) \delta(x-y) \, ,
the time evolution of every function ψ0 is determined by this propagation kernel K,
\psi_t(x) = \int_{y} \psi_0(y) {1\over \sqrt{2\pi it}} e^{i (x-y)^2 / 2t} \, .
Thus, this is a formal way to express the fundamental solution or general solution. The interpretation of this expression is that the amplitude for a particle to be found at point x at time t is the amplitude that it started at y, times the amplitude that it went from y to x, summed over all the possible starting points. In other words, it is a convolution of the kernel K with the arbitrary initial condition ψ0,
\psi_t = K * \psi_0 \, .
Since the amplitude to travel from x to z after a time t+t' can be considered in two steps, the propagator obeys the composition identity,
\int_y K(x-y;t)K(y-z;t') = K(x-z;t+t')~ ,
which can be interpreted as follows: the amplitude to travel from x to z in time t+t' is the sum of the amplitude to travel from x to y in time t, multiplied by the amplitude to travel from y to z in time t', summed over all possible intermediate states y. This is a property of an arbitrary quantum system, and by subdividing the time into many segments, it allows the time evolution to be expressed as a path integral.[10]
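The composition identity can be checked numerically. A minimal sketch (grid sizes are assumptions; ħ = m = 1): following the prescription above, keep a small real part ε in the kernel so that the integral over the intermediate point converges, and take ε → 0 only at the end:

    import numpy as np

    def K(x, z):
        # complex diffusion kernel K_z; Re(z) > 0 keeps integrals convergent
        return np.exp(-x**2 / (2 * z)) / np.sqrt(2 * np.pi * z)

    y = np.linspace(-40.0, 40.0, 20001)   # grid for the intermediate point y
    dy = y[1] - y[0]

    eps = 0.05                # small regulator, to be sent to zero at the end
    z1 = 0.8j + eps           # "time" t  = 0.8
    z2 = 0.5j + eps           # "time" t' = 0.5
    x1, x2 = 1.3, -0.7

    lhs = np.sum(K(x1 - y, z1) * K(y - x2, z2)) * dy   # sum over intermediate y
    rhs = K(x1 - x2, z1 + z2)
    print(abs(lhs - rhs))     # small, limited only by the finite grid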
Analytic continuation to diffusion
The spreading of wave packets in quantum mechanics is directly related to the spreading of probability densities in diffusion. For a particle which is randomly walking, the probability density function at any point satisfies the diffusion equation (also see the heat equation),
{\partial \over \partial t} \rho = {1\over 2} {\partial^2 \over \partial x^2 } \rho ~,
A solution of this equation is the spreading Gaussian,
\rho_t(x) = {1\over \sqrt{2\pi t}} e^{-x^2 \over 2t} ~,
and, since the integral of ρt is constant while the width is becoming narrow at small times, this function approaches a delta function at t=0,
\lim_{t\rightarrow 0} \rho_t(x) = \delta(x) \,
again only in the sense of distributions, so that
\lim_{t\rightarrow 0} \int_x f(x) \rho_t(x) = f(0) \,
for any smooth test function f.
The spreading Gaussian is the propagation kernel for the diffusion equation, and it obeys the convolution identity
K_{t+t'} = K_{t}*K_{t'} \, ,
which allows diffusion to be expressed as a path integral. The propagator is the exponential of an operator H,
K_t(x) = e^{-tH} \, ,
which is the infinitesimal diffusion operator,
H= -{\nabla^2\over 2} \, .
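A small numerical illustration of K_t = e^{-tH} (a sketch under an assumed finite-difference discretization): build H = −∇²/2 on a periodic grid, exponentiate it, and check that a delta function evolves into the spreading Gaussian:

    import numpy as np
    from scipy.linalg import expm

    N, L = 400, 40.0
    dx = L / N
    x = np.linspace(-L/2, L/2, N, endpoint=False)

    # periodic finite-difference Laplacian, then H = -laplacian/2
    lap = (-2 * np.eye(N) + np.eye(N, k=1) + np.eye(N, k=-1)) / dx**2
    lap += (np.eye(N, k=N-1) + np.eye(N, k=-(N-1))) / dx**2
    H = -0.5 * lap

    t = 2.0
    Kt = expm(-t * H)                   # the heat propagator as a matrix
    rho = Kt[:, N // 2] / dx            # evolution of a delta centred at x = 0

    exact = np.exp(-x**2 / (2 * t)) / np.sqrt(2 * np.pi * t)
    print(np.max(np.abs(rho - exact)))  # small finite-difference error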
Viewed as an integral kernel with two arguments, K is translation invariant, depending only on the difference of its arguments:
K_t(x,x') = K_t(x-x') \, .
Translation invariance means that continuous matrix multiplication,
C(x,x'') = \int_{x'} A(x,x')B(x',x'') \, ,
is essentially convolution,
C(\Delta) = C(x-x'') = \int_{x'} A(x-x') B(x'-x'') = \int_{y} A(\Delta-y)B(y) \, .
The exponential can be defined over a range of ts which include complex values, so long as integrals over the propagation kernel stay convergent,
K_z(x) = e^{-zH} \, .
As long as the real part of z is positive, for large values of x, K is exponentially decreasing, and integrals over K are indeed absolutely convergent.
The limit of this expression, as z approaches the pure imaginary axis, is the Schrödinger propagator encountered above,
K_t^{\rm Schr} = K_{it+\epsilon} = e^{-(it+\epsilon)H} \, ,
which illustrates the above time evolution of Gaussians.
From the fundamental identity of exponentiation, or path integration,
K_z * K_{z'} = K_{z+z'} \,
holds for all complex z values, where the integrals are absolutely convergent so that the operators are well defined.
Thus, quantum evolution of a Gaussian, which is the complex diffusion kernel K,
\psi_0(x) = K_a(x) = K_a * \delta(x) \,
amounts to the time-evolved state,
\psi_t = K_{it} * K_a = K_{a+it} \, .
This illustrates the above diffusive form of the complex Gaussian solutions,
\psi_t(x) = {1\over \sqrt{2\pi (a+it)} } e^{- {x^2\over 2(a+it)} } \, .
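This last identity is easy to verify numerically. A sketch with assumed grid parameters (ħ = m = 1): evolve K_a under the free Schrödinger equation in momentum space, where e^{-itH} acts as multiplication by e^{-ik²t/2}, and compare with the closed form K_{a+it}:

    import numpy as np

    N, L = 4096, 200.0
    x = np.linspace(-L/2, L/2, N, endpoint=False)
    k = 2 * np.pi * np.fft.fftfreq(N, d=L/N)

    def K(x, z):
        # complex diffusion kernel K_z
        return np.exp(-x**2 / (2 * z)) / np.sqrt(2 * np.pi * z)

    a, t = 1.0, 3.0
    psi0 = K(x, a)                       # initial Gaussian K_a
    psi_t = np.fft.ifft(np.fft.fft(psi0) * np.exp(-1j * k**2 * t / 2))
    print(np.max(np.abs(psi_t - K(x, a + 1j * t))))   # agrees to high accuracy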
Notes
1. ^ By contrast, the introduction of interaction terms in dispersive equations, such as for the quantum harmonic oscillator, may result in the emergence of envelope-non-dispersive, classical-looking solutions—see coherent states: Such "minimum uncertainty states" do saturate the uncertainty principle permanently.
Source: Facebook
Date: pre-2014
Unsorted Postings
David Pearce
negative utilitarianism, utilitronium shockwaves, veganism, physicalism, consciousness, suffering, transhumanism
[on classical versus negative utilitarianism]
A response to Toby Ord's essay
Why I Am Not A Negative Utilitarian
Toby, a few thoughts...
1) World destruction? You write, "...a thoroughgoing Negative Utilitarian would support the destruction of the world (even by violent means)". No, a thoroughgoing classical utilitarian is obliged to convert your matter and energy into pure utilitronium, erasing you, your memories and indeed human civilisation. By contrast, the negative utilitarian believes that all our ethical duties will have been discharged when we have phased out suffering. Thus a negative utilitarian can support creating a posthuman civilisation animated by gradients of intelligent bliss where all your dreams come true. By contrast, the classical utilitarian is obliged to erase such a rich posthuman civilisation with a utilitronium shockwave. In practice, I don't think it's ethically fruitful to contemplate destroying human civilisation, whether by thermonuclear Doomsday devices or utilitronium shockwaves. Until we understand the upper bounds of intelligent agency, the ultimate sphere of responsibility of posthuman superintelligence is unknown. Quite possibly, this ultimate sphere of responsibility will entail stewardship of our entire Hubble volume across multiple quasi-classical Everett branches, maybe extending even into what we naively call the past (cf. "The Two-State Vector Formalism of Quantum Mechanics: an Updated Review"). In short, we need to create full-spectrum superintelligence.
2) Negative utilitarians can (and do!) argue for creating immense joy and happiness. Indeed, other things being equal, negative utilitarians are ethically bound to do so. For the thought of a painless but joyless world strikes most people as depressing. Negative utilitarians are committed to phasing out even the faintest hint of disappointment! The prospect of an insipid pain-free life without peak experiences - mere muzak and eating potatoes, so to speak - sounds bleak. If a thought or deed causes the slightest unease or distress, then other things being equal, that thought or deed is not expressive of negative utilitarianism.
3) You write, "Absolute NU is a devastatingly callous theory". No: NU is an unsurpassably compassionate theory. You are "callous" only if you are indifferent to someone's suffering, not if you don't act to amplify the well-being of the already happy or act to create happiness de novo - although in practice, negative utilitarians should promote intelligent superhappiness too. Inducing sadness or disappointment is not NU.
[I think the force of your example depends on an untenable metaphysics of personal identity. If instead we use a more empirically supportable ontology of here-and-nows strung together in different sequences thanks to natural selection, no one is harmed by waking up happy in the morning rather than superhappy like their namesake the night before. So this is really an issue of population ethics, normally reckoned a different topic.]
Let us compare the callousness / compassion of classical utilitarianism and NU.
Since we're doing thought-experiments, imagine if a magic genie offers me super-exponential growth in my bliss at the price of exponential growth in your agony and despair. If I'm a classical utilitarian, then I am ethically bound to accept the genie's offer. Each year, your torment gets unspeakably worse as my bliss becomes ever more wonderful. Indeed, the thought I'm ethically doing the right thing increases my bliss even further! By generating so much net bliss, I'm the most saintly person who ever existed! If you knew how incredibly superhumanly wonderful I'm feeling, then you'd realise that my super-bliss easily offsets your tortured despair. Your tortured despair is a trivial pinprick in comparison to my super-exponentially growing bliss!
Of course, as a real-life negative utilitarian, I'd politely decline the genie's offer.
But if you win me over to classical utilitarianism, I'll accept.
Which is the callous choice?
Classical utilitarianism offers perhaps the best hope of cheating Hume's Guillotine and naturalising value. But does it maximise moral value? Or something else?
* * *
Only follow this link if you're not already convinced of NU:
One of the many reasons for scepticism that we're living in an "ancestor-simulation". Something similar to heaven and hell may be all too real. The most that moral agents can do is try to minimise the latter by developing posthuman superintelligence; and ultimately - I hope - forget about the existence of sub-hedonic zero states altogether.
Sadomasochism as a counterexample? I agree with Ole Martin here. In a minority of people, consensual sadomasochistic role-play can induce the release of intensely rewarding endogenous opioids. In the long run, however, I think such "mixed" states can be replaced by gradients of pure unadulterated bliss.
* * *
"You can't party with a negative utilitarian"
(Andrés Gomez Emilsson)
Andrés, a negative utilitarian can want you to be outrageously happy. And he certainly shouldn't wantonly cause you the slightest distress. In the future, I hope we can party all day and all night. But if you were drowning and the revellers on the beach were negative utilitarians, you could count on NUs to wade in and pull you out, whereas the classical utilitarians would be getting out their pocket calculators and totting up the fun forgone. Hopefully, this knowledge can make partying with negative utilitarians more enjoyable.
[OK, for expository purposes I over-simplify]
Andrés, a kindly classical utilitarian genie offers me super-exponentially increasing bliss at the price of your exponentially increasing despair. As a NU, I politely decline. If I were a CU, I'd be bound to say yes. With whom would you rather party?
[...]Yes, true, prioritarianism comes in many flavours - depending on the range of weights the prioritarian assigns in his trade-offs. But if the notional genie offers me super-exponential increase in my bliss at the price of "merely" linear (or whatever) increase in your misery, then if I were a prioritarian, wouldn't I be bound to accept?
[on utilitronium shockwaves versus gradients of bliss]
Why is the idea of life animated by gradients of intelligent bliss attractive, at least to some of us, whereas the prospect of utilitronium leaves almost everyone cold? One reason is the anticipated loss of self: if one's matter and energy were converted into utilitronium, then intuitively the intense undifferentiated bliss wouldn't be me. By contrast, even a radical recalibration of one's hedonic set-point intuitively preserves the greater part of one's values, memories and existing preference architecture: in short, personal identity. Whether such preservation of self would really obtain if life were animated by gradients of bliss, and whether such notional continuity is ethically significant, and whether the notion of an enduring metaphysical ego is even intellectually coherent, is another matter. Regardless of our answers to such questions, there is a tension between our divergent responses to the prospect of cosmos-wide utilitronium and intelligent bliss. People rarely complain that e.g. orgasmic sexual ecstasy lasts too long, and that regrettably they lose their sense of personal identity while orgasm lasts. On the contrary: behavioural evidence strongly suggests that most men in particular reckon sexual bliss is too short-lived and infrequent. Indeed if such sexual bliss were available indefinitely, and if it were characterised by an intensity orders of magnitude greater than the best human orgasms, then would anyone - should anyone - wish such ecstasy to stop? Subjectively, utilitronium presumably feels more sublime than sexual bliss, or even whole-body orgasm. Granted the feasibility of such heavenly bliss, is viewing the history of life on Earth to date as a mere stepping-stone to cosmic nirvana really so outrageous?
The answer is far from obvious. For example, one might naively suppose that a negative utilitarian would welcome human extinction. But only (trans)humans - or our potential superintelligent successors - are technically capable of phasing out the cruelties of the rest of the living world on Earth. And only (trans)humans - or rather our potential superintelligent successors - are technically capable of assuming stewardship of our entire Hubble volume. Conceptions of the meaning of the term "existential risk" differ. Compare David Benatar's "Better Never To Have Been" with Nick Bostrom's "Astronomical Waste". Here at least, we will use the life-affirming sense of the term. Does negative utilitarianism or classical utilitarianism represent the greater threat to intelligent life in the cosmos? Arguably, we have our long-term existential risk-assessment back-to-front. A negative utilitarian believes that once intelligent agents have phased out the biology of suffering, all our ethical duties have been discharged. But the classical utilitarian seems ethically committed to converting all accessible matter and energy - not least human and nonhuman animals - into relatively homogeneous matter optimised for maximum bliss: "utilitronium".
Ramifications? Severe curtailment of personal liberties in the name of Existential Risk Reduction is certainly conceivable. Assume, for example, that the technical knowledge of how to create and deploy readily transmissible, 100% lethal, delayed-action weaponised pathogens leaks into the public domain. Only the most Orwellian measures - a perpetual global totalitarianism - could hope to prevent their use, whether by a misanthrope or an idealist. Such measures would most likely fail. By contrast, constitutively happy people would be incapable of envisaging the development and use of such a doomsday agent. The biology of suffering in intelligent agents is a deep underlying source of existential risk - and one that can potentially be overcome.
A theoretically inelegant but pragmatically effective compromise solution might be to initiate a utilitronium shockwave that propagates outside the biosphere - or the realm of posthuman civilisation. The world within our cosmological horizon could then be tiled with utilitronium with the exception of a negligible island (or archipelago) of minds animated "merely" by gradients of intelligent bliss. One advantage of this hybrid option is that most refuseniks would (presumably) be indifferent to the fate of inert matter and energy outside their lifeworld. Ask someone today whether they'd mind if some anonymous rock on the far side of the moon were converted into utilitronium and they'd most likely shrug.
In future, gradients of intelligent bliss orders of magnitude richer than today's peak experiences could well be a design feature of the post-human mind. However, I don't think intracranial self-stimulation is consistent with intelligence or critical insight. This is because it is uniformly rewarding. Intelligence depends on informational sensitivity to positive and negative stimuli - even if "negative" posthuman hedonic dips are richer and higher than the human hedonic ceiling.
In contrast to life animated by gradients of bliss, the prospect of utilitronium cannot motivate. Or rather the prospect can motivate only a rare kind of hyper-systematiser drawn to its simplicity and elegance. The dips of intelligent bliss need not be deep [...] Everyday hedonic tone could be orders of magnitude richer than anything physiologically feasible now. But will such well-being be orgasmic? Orgasmic bliss lacks - in the jargon of academic philosophy - an "intentional object". So presumably there will be selection pressure against any predisposition to enjoy 24/7 orgasms. By contrast, information-sensitive gradients of intelligent bliss can be adaptive - and hence sustainable indefinitely, allowing universe maintenance: responsible stewardship of Hubble volume.
At any rate, posthumans may regard even human "peak experiences" as indescribably dull by comparison.
* * *
Just as "computronium" is matter and energy optimised for maximal computing power, "utilitronium" is matter and energy optimised for maximum bliss. Utilitronium would presumably be propagated from its place of origin at close to the velocity of light via suitably programmed von Neumann probes etc. A utilitronium shockwave is potentially lethal to intelligent life with complex values because utilitronium is (generally assumed to be) a relatively homogeneous organisation of matter and energy. Counterarguments? Yes, but I'm not sure if they work. Classical utilitarianism has vastly more counterintuitive implications than the homely moral dilemmas of Trolleyology might suggest.
* * *
Quite a long countdown, I fear. But a twinkle in the eye of eternity.
Infinite Bliss? Countdown to a Utilitronium Shockwave
A talk by David Pearce
"Language is a virus from outer space"
(William S. Burroughs)
Inhale at your own risk...
I've never been to Nottingham, perhaps an unlikely venue to launch a utilitronium shockwave.
The ultimate in effective altruism - or a reductio ad absurdum of classical utilitarianism? The ramifications of taking a classical utilitarian ethic seriously do need to be explored - even if we ultimately reject them. My main anxiety in debating these issues is that "normal" people may associate the practical and compelling case for phasing out the biology of suffering on Earth with wild cosmological speculations about the far future of sentience.
Thanks Astro. What (if any) is the difference between a utilitronium shockwave and a hedonium shockwave? What would be a eudaimonian shockwave? Ethicists tend not to be much interested in cosmology and vice versa. But the ultimate fate of our Hubble volume may turn on our values. (No, I didn't get the chance to explore such issues this time. In public it's more fruitful to push for gradients of intelligent bliss than utilitronium shockwaves...)
Computronium is matter and energy optimised for maximum computing power. Utilitronium is matter and energy optimised for maximum utility. Such matter and energy is often assumed to be relatively homogeneous. This remains to be shown. Advocacy of life animated by gradients of intelligent bliss is IMO technically and sociologically more credible than advocacy of a utilitronium shockwave as normally understood, i.e. utilitronium = hedonium. But maybe it's misunderstood...
Alexander, yes. "Human-unfriendly" AGI is sometimes illustrated by the idea of a paperclip maximiser. A well-known quote of Eliezer Yudkowsky runs, "The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else." Is the "something else" most credibly utilitronium? The idea of a cosmos tiled with paperclips is fanciful. Yet a utilitronium-saturated forward light-cone is within the bounds of possibility. Indeed by the lights of classical utilitarianism, some kind of cosmic orgasm would appear ethically mandatory. Critics might say this outcome is a reductio ad absurdum of classical utilitarian ethics.
Wise words Sasha. Thanks. Alas abolitionist bioethics sorely needs some larger-than-life figure who can take the project forward.
When can we expect the first bar-room brawl over the wisdom of launching a utilitronium shockwave:
("Russian man shot in quarrel over Immanuel Kant’s philosophy")
[on whether digital zombies can investigate sentience]
Neurons are indeed fabulously complex information processors. Thus e.g. the different amino acid sequences and secondary, tertiary and quaternary protein folding structures internal to the neuron may well be implicated in innumerable different microqualia. But once again, I'm at a loss to know how a digital computer could investigate the first-person/third-person psychophysical mapping needed to understand their relationship. Even with the master equation of a formally complete Theory of Everything, you'll need to instantiate some of its solutions to understand them. "Mary" (cf. https://en.wikipedia.org/wiki/Mary's_room) - or a digital computer - never will understand redness. More to the point, ignorance of the phenomenal nature of pain and pleasure entails a digital computer will never understand why anything matters at all. Insidiously, the Church-Turing thesis has promoted an impoverished conception of what constitutes a well-defined problem an intelligent agent can investigate.
[...]We are certainly acutely aware of some of our logical stumbles. But I'd argue that one is most vividly aware of an evolutionarily ancient process that works extraordinarily well - and is completely beyond any digital computer, which is "not even stupid". The classical world is an artefact of quantum minds. What one naively calls "perceiving one's surroundings" actually entails generating "bound" and cross-modally matched experiential objects in a unitary world-simulation run by a (fleetingly) unitary self - and in almost real time to boot.
* * *
Can a cognitive agent be intelligent, let alone superintelligent, and yet fail to understand, or lack any capacity to investigate, fundamental features of the natural world? If the agent in question were constitutionally ignorant of the properties of, say, matter, energy and the second law of thermodynamics, then we would say no: such an agent is profoundly ignorant, or at best an idiot savant. Yet if the cognitive agent in question is constitutionally ignorant of the properties of, say, phenomenal objects, conscious minds, or the nature of pain and pleasure, then many AI researchers are nonetheless willing to ascribe intelligence - and potentially even superintelligence. IMO this is an anthropomorphic projection on our part.
How might the apologist for digital (super)intelligence respond? Several ways, I guess. Here are just two.
First, s/he might argue that the manifold varieties of consciousness are unimportant and/or causally impotent. Intelligence, and certainly not superintelligence, does not concern itself with trivia.
But in what sense are, say, the experience of agony or despair trivial, whether subjectively to their victim, or conceived as disclosing a feature of the natural world? Compare how, in a notional zombie world otherwise physically type-identical to our world, nothing would inherently matter at all. Some of our supposed counterparts might undergo boiling in oil, but who cares: they aren't sentient. By contrast to such a fanciful zombie world, the nature of phenomenal agonies as we undergo such states isn't trivial: indeed the thought that (1) I'm in unbearable agony and (2) the agony doesn't matter, is devoid of cognitive sense. And in any case, we can be sure that phenomenal properties aren't causally impotent epiphenomena. Epiphenomena, by definition, lack causal efficacy - and hence lack the ability physically or functionally to stir us to write and talk about their existence.
Second, the believer in digital (super)intelligence might claim that (some of the programs executed by) digital computers are conscious, or at least potentially conscious, not least future software emulations of human brains. For reasons we admittedly don't yet understand, some physical states of matter and energy, perhaps the different algorithms executed in various information processors, are identical with different states of consciousness, i.e. a functionalist version of the mind-brain identity theory is correct. Granted, we don't yet understand the mechanisms by which information processing generates consciousness. But whatever these consciousness-generating processes may turn out to be, materialism is correct. Biological and nonbiological agents alike can be conscious minds.
Unfortunately, there is an insurmountable problem here. Identity is not a causal relationship. We can't simultaneously claim that a conscious state is identical with a brain state and maintain that this brain state causes (or "generates", or "gives rise to" etc) the conscious state in question. Nor - and this is where Searle stumbles - can causality operate between what are only levels of description. Hence the Hard Problem of Consciousness and the Explanatory Gap. What I meant by denying that consciousness is a mere puzzle is that the solutions to puzzles don't challenge our conceptual scheme. Thus a difficult crossword clue may stump us; but we may be confident the answer will leave our world-picture intact. By contrast, if we discovered a fairy living at the bottom of the garden, even a little one, then materialism would be falsified. Materialism is the thesis that the physical facts exhaustively constitute all the facts. The existence of consciousness - even a single instance of consciousness - falsifies materialism. Actually, not everyone would agree here. Radical eliminativism about consciousness has been described as the craziest theory in the history of philosophy; but eliminativists are right in one sense: given the ontology of physics as standardly understood, consciousness is impossible. Most of us find eliminativism literally incredible.
Anyhow, the conjecture I offer to resolve the mystery of conscious mind, involving a combination of Strawsonian physicalism plus macroscopic quantum coherence, is most likely false. But it's empirically adequate, eliminates the Hard Problem and the Explanatory Gap, and predicts that digital computers will never be sentient. We shall see. :-)
Dustin, I guess we differ on whether our computers show that abstractions can have causal efficacy. Pragmatically, of course, it's hugely useful to pretend they do - just as pragmatically it's useful to think of the lump of silicon in front of me in terms of the different abstraction layers of a computer architecture (i.e. hardware, firmware, assembler, kernel, operating system and applications). But everything that takes place in your PC supervenes on microphysical interactions whose behaviour is exhaustively described by the physics of the Standard Model (or its ultimate successor). Reality only has one "level" - and that's where all the work gets done. As you know, I take an equally reductive approach to consciousness.
[on NU and life's potential galactic radiation]
Seemingly useless metaphysical debates can sometimes have profound ethical consequences. So I'm going to risk outlining my "philosophical" disagreements with Brian - even though ethically we agree on a lot! IMO consciousness, for example a phenomenal pain, is concrete, possessing spatio-temporal location and causal efficacy; an algorithm is an abstraction. I'm sceptical about any ultimate ontology of abstract objects (what might actually cause us to credit their existence?), even though, if we don't treat abstract objects as real for some purposes, we will miss many features of the real world. [Perhaps compare our understanding of functionalist / teleological explanations pre- and post-Darwin.] Thus natural selection has recruited e.g. pains to play, typically, an information-processing role in living organisms. But neuropathic pain, for instance, that doesn't play any algorithmic or information-processing role in the organism is just as real as its "typical" counterpart. Consciousness, with or without any functional role, is not something mind/brains do: it's what they are. Panpsychism? Presumably, we'll ultimately need rigorously to derive the phenomenology of our minds from the properties of the fundamental stuff of the world - a reductive physicalism with no strong emergence, i.e. no unexplained eruption into the world of something ontologically new, not expressible within the mathematical straitjacket of modern physics. I don't think pan-experientialism / Strawsonian physicalism can do this, or rather not on its own. Hence the seemingly intractable binding problem and the classically inexplicable existence of "bound" phenomenal objects and the (fleeting, synchronic) unity of consciousness - and the desperate-sounding proposals that quantum mind theorists have devised to overcome the problem. But I don't think any proposal to solve the binding problem consistent with reductive physicalism can even get off the ground unless we assume a pan-experientialist / Strawsonian physicalist ontology. Such an ontology is the precondition of a reductive explanation of phenomenal minds, not an explanation itself. Two grounds for taking pan-experientialism / Strawsonian physicalism seriously IMO are (1) the fundamental entities in theoretical physics (fields / superstrings / branes) are defined purely mathematically; their supposed insentience is an extra assumption, not integral to the physics. And (2) the only part of the world to which one has direct access, namely one's own mind/brain, has precisely those attributes that the pan-experientialist / Strawsonian physicalist claims - contrary to one's naive materialist or abstract pan-informationalist intuitions. More to be said? Yes, for sure. :-)
* * *
Adriano, could things go wrong? Yes. All the more reason to hardwire the default settings of tomorrow's hedonic floor above today's hedonic ceiling. Lots can go wrong in posthuman paradise, but let's ensure that the worst catastrophes aren't as bad as today's peak experiences. If the rest of our Hubble volume is sterile, then the ethical utilitarian is simply obligated to ensure suffering doesn't recur. However, sociologically, negative utilitarianism seems an unlikely value system to prevail - and who knows what might be thrown up by experiments in colonisation of other solar systems. But maybe the molecular signature of unpleasantness belongs to a tiny subset of possible modes of sentience - and (super)intelligent humans will no more re-create such a relic than they may decide to reintroduce, say, smallpox. Pain and suffering have defined life on Earth for so long that it's natural to assume they will feature in our conceptual scheme indefinitely - and assume that the risk of the inadvertent or even deliberate re-creation of suffering will endure, even if we phase it out. Talk of post-humans opting to run ancestor simulations, colonising barren solar systems with Darwinian life, the spectre of digital sentience, creating baby universes (etc) reflects this worry. I agree every kind of catastrophic risk scenario should be rigorously explored. But it's likely IMO that experience below hedonic zero will be relegated to the Dark Ages and its re-creation inconceivable. Until we understand more, however, all I can do is say this needs more research. Lame but true.
* * *
Calling consciousness a Hard Problem for the materialist is like calling fossils a Hard Problem for the biblical literalist, i.e. true as far as it goes, but not an adequate expression of the magnitude of the challenge. If we're prepared to endorse pan-experientialism / Strawsonian physicalism, then the prospect of digital sentience and mind uploading might seem more feasible. Indeed, IMO pan-experientialism / Strawsonian physicalism is the precondition of any explanation of how organic minds solve the phenomenal binding problem. But it's merely a precondition. On the face of it, at least, the existence of bound phenomenal objects and the fleeting synchronic unity of the self is inconsistent with reductive physicalism. For what it's worth, I'm a Strawsonian physicalist who predicts on theoretical grounds that classical computers will never be subjects of experience nor support anything other than digital zombies. David Chalmers has recently written a nice overview of the binding problem - though I wouldn't accept his naturalistic dualism. Again, for what it's worth I think the way organic minds solve the binding problem is the greatest cognitive achievement of the past half-billion years.
[on the near-term future]
The history of futurology is not encouraging. Most "predictions" by futurists are more akin to prophecies that reveal more about personality, preoccupations and capacity for wish-fulfilment of the author than the future they purport to describe. In any case, predicting the future behaviour of self-reflexive agents is not like predicting the behaviour of non-intelligent physical systems. Some predictions are self-fulfilling; other predictions are self-stultifying; and the public forecasts of politicians, social scientists, singularitarians and transhumanists should all be viewed in this light.
With this in mind, here goes....
I probably sound a naive optimist. I anticipate a future of paradise engineering. One species of recursively self-improving organic robot is poised to master its own genetic source code and bootstrap its way to full-spectrum superintelligence. The biology of suffering, ageing and disease will shortly pass into history. A future discipline of compassionate biology will replace conservation biology. Our descendants will be animated by gradients of genetically preprogrammed bliss orders of magnitude richer than anything physiologically accessible today. A few centuries hence, no experience below "hedonic zero" will pollute our forward light-cone.
Existential risk?
I think the greatest underlying source of existential and global catastrophic risk lies in male human primates doing what evolution "designed" male human primates to do, namely wage war. Unfortunately, we now have thermonuclear weapons to do so.
1) Does the study of ERR diminish or enhance ER? One man's risk is another man's opportunity. 2) Is the existence of suffering itself a form of ER insofar as it increases the likelihood of intelligent agency pressing a global OFF button, cleanly or otherwise? If I focussed on ERR, phasing out suffering would be high on the To Do list.
Well, I'd argue it's a form of anthropomorphic projection on our part to ascribe intelligence or mind to digital computers. Believers in digital sentience, let alone digital (super)intelligence, need to explain Moravec's paradox.
(cf. https://en.wikipedia.org/wiki/Moravec's_paradox) For sure, digital computers can be used to model everything from the weather to the Big Bang to thermonuclear reactions. Yet why is, say, a bumble bee more successful in navigating its environment in open-field contexts than the most advanced artificial robot the Pentagon can build today? The success of biological lifeforms since the Cambrian Explosion has turned on the computational capacity of organic robots to solve the binding problem and generate cross-modally matched, real-time simulations of the mind-independent world. On theoretical grounds, I predict digital computers will never be capable of generating unitary phenomenal minds, unitary selves or unitary virtual worlds. In short, classical digital computers are invincibly ignorant zombies. They can never "wake up" and explore the manifold varieties of sentience.
Suffering has been a pervasive feature of life on Earth over the past half billion years - and its existence is deeply embedded in our conceptual scheme. So it's natural to extrapolate and project its extension far and wide as intelligent life radiates across the Galaxy and beyond. However, perhaps the core molecular structures implicated in experience below hedonic zero occupy a comparatively small, well-defined and restrictive class of states that posthumans would no more recreate than the cuckoo-clock. In absolute terms, the kingdom of pain may well be as extensive as the kingdom of pleasure. IMO our overriding priority should be making its existence - and eventually perhaps even knowledge of its existence - completely off-limits. Yes, potentially hundreds of billions of years of sublime bliss lie ahead in our forward light-cone. But we need to avoid getting stuck in some sub-optimal local maximum. Our superhappy posthuman descendants will presumably feel that the Darwinian horror story that spawned them was all worthwhile - insofar as they choose to contemplate its existence at all. But indescribable joy and suicidal despair cannot simultaneously be compared. Despite their (presumably) vastly superior intelligence, I suspect aspects of our existence will be cognitively closed to them. Worthwhile? I don't believe it either. Recall Ursula Le Guin's "The Ones Who Walk Away from Omelas". Despite advocating - and tentatively predicting - a future life of sublime bliss - I'm not remotely convinced it's worth the price.
* * *
Consider in turn each of our core emotions. What was its role in the ancestral environment? Do we want to preserve, enrich, diminish, or abolish the emotion altogether? And can we genetically design novel emotions too?
* * *
Year 2047
The bad news?
I fear we're sleepwalking towards the abyss. Some of the trillions of dollars of weaponry we're stockpiling designed to kill and maim rival humans will be used in armed conflict between nation states. Tens of millions and possibly hundreds of millions of people may perish in thermonuclear war. Multiple possible flash-points exist. I don't know if global catastrophe can be averted. For evolutionary reasons, male humans are biologically primed for competition and violence. Perhaps the least sociologically implausible prevention-measure would be a voluntary transfer of the monopoly of violence currently claimed by state actors to the United Nations. But I wouldn't count on any such transfer of power this side of Armageddon.
The good news?
Freeman Dyson prophesies that soon we'll "be writing genomes as fluently as Blake and Byron wrote verses". If so, I'm not sure about timescales. However, "narrow" artificial intelligence and powerful gene-authoring software tools will shortly enable humans to edit our own genetic source code in accelerating cycles of recursive self-improvement. In consequence, human intelligence will be progressively amplified and enriched. Youth, vitality and lifespans will be extended indefinitely. Suffering, depression and experience below "hedonic zero" will be relegated to history. Human traits such as weakness of will, the struggle for meaning and significance, quasi-sociopathic empathy deficits, and a host of mediocre states of mind that currently pass for mental health will increasingly become optional as we bootstrap our way to post-humanity. Not least, a growing mastery of our biological reward circuitry will allow the upper bounds of human "peak experiences" to be pushed unimaginably higher. Likewise, hedonic set-points can be genetically recalibrated. Everyday life later this century will potentially be animated by gradients of intelligent bliss.
Bioconservative critics will doubtless worry that "something valuable will be lost" as responsible prospective parents stop playing genetic roulette as the reproductive revolution of "designer babies" unfolds. Tomorrow's parents-to-be will opt for preimplantation genetic diagnosis and "designer zygotes" to ensure invincible physical and mental health for their future children. Among young adults, novel states of consciousness as different as waking from dreaming are likely to migrate from psychedelic chemists working in the scientific counterculture to mainstream society. "Bad trips" will become physiologically impossible because their molecular signature is absent. Unfortunately, words fail here. Post-Darwinian consciousness is likely to be incomprehensible to archaic Homo sapiens.
I think the greatest ethical change ahead this century may be the antispeciesist revolution. This global transition will probably follow rather than precede the commercialisation of gourmet in vitro meat and the end of factory farming and the death factories. It's worth stressing that the antispeciesist doesn't claim members of all species are of equal value. S/he argues simply that beings of equivalent sentience are of equal value. Hence they deserve to be treated accordingly - regardless of gender, race or species. Pigs, sheep and cows are of equivalent sentience to human infants, prelinguistic toddlers, victims of Alzheimer's disease and the severely intellectually handicapped. Only arbitrary anthropocentric bias leads us to kill, abuse and exploit the former and care for the latter. Despite their superior intelligence, I suspect our grandchildren may struggle to comprehend what their grandparents did to other sentient beings.
[on engineering a happy biosphere]
Here are the PDF and PowerPoint of the "Conservation Biology versus Compassionate Biology" talk I gave in Santa Maria:
Although the two approaches are here contrasted, they can in principle be combined. But where to strike a balance? It's the last and most technically ambitious strand of the abolitionist project - leading ultimately to some kind of high-tech Jainism.
The elephant case study relies on (my) back-of-an-envelope calculations rather than a rigorous methodology. But the $2.5 billion annual cost of full healthcare and welfare provision for the entire population of free-living African elephants may be a bit pessimistic, even after allowing for cost overruns: the great majority of the 500,000-strong elephant population would need far less than the $5000 per head this figure allows. Chipping/GPS tracking and immunocontraception would presumably cost at most a few hundred dollars. What's feasible for all UK "domestic" dogs is feasible for free-living elephants. Chipping can range from simple tagging to more complex remote monitoring of health status (e.g. cortisol monitoring - elevated cortisol levels indicating high stress and consequent need for investigation and possible compassionate intervention). Late-life orthodontics to prevent starvation would be more costly, but the kinds of material used would last decades. Timescale for the 500,000 population? Perhaps 1-2 years (?) if an international consensus existed.
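For what it's worth, the headline arithmetic is easy to confirm from the figures quoted above (the figures themselves remain back-of-an-envelope):

    population = 500_000          # free-living African elephants (figure above)
    cost_per_head = 5_000         # USD per elephant per year (figure above)
    print(population * cost_per_head)   # 2,500,000,000 -> $2.5 billion per year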
I chose the African elephant because s/he has the largest brain of a terrestrial vertebrate, and all the necessary technologies for a comprehensive healthcare program are available now - nothing transhumanist or "sci-fi". In a number of ways, free-living elephants are an "easy" example. Elephants are large, long-lived, vegetarian, and "charismatic". No seemingly irreconcilable interests are involved (e.g. lions versus zebras) because mature elephants typically have no natural predators: the limiting factor on elephant populations in the absence fertility regulation is food/adequate nutrition. The one exception I know to this generalisation is the terrible case of:
But this is the kind of horror that a compassionate stewardship of Nature would prevent.
It's worth distinguishing between "wild" and "free living". For the most part, humans are no longer the former, but we are mostly the latter. There is no technical reason why we can't extend the principles of the Swedish model to free-living members of other species.
* * *
Reeve, utilitarians believe we should phase out suffering, and members of this HI group are committed to phasing out involuntary suffering. But nothing in our core statement of principles - or the core statement of principles of the original WTA /H+ - involves a commitment to utilitarian ethics. The transhumanist movement embraces classical, negative and preference utilitarians; deontologists; virtue ethicists, pluralists; Christians; Jains; Buddhists; and much else besides. What's new isn't our ethic of reducing - and ultimately phasing out - the ancient biology of involuntary suffering, but rather the technology to turn utopian dreaming into practical reality.
* * *
What the British call Heath Robinson devices and Americans call Rube Goldberg machines strike us as comically absurd. Yet post-humans may recognise that even more irrational inefficiency is built into humanity's "natural" neural reward machinery for generating pleasure. That said, adopting the equivalent of wireheading or mainlining heroin is currently neither ethical nor prudent.
[on high-tech Jainism]
Brian raised an important point. "I don't think I would regard insect suffering as a pinprick. To the insect herself, her suffering is all that matters - it overwhelms everything."
For some insects at least, presumably this can't be the case, e.g. in those species of locust where the head segment may carry on eating while the tail segment is being devoured by a predator. There can't be a unitary subject of experience in such cases. Maybe there is raw pain in the ganglia of the tail segment (and indeed in the segment of a writhing lizard tail that has detached to distract a predator, etc). But if so, to what extent does such separate pain experience have an emotional, "affective" aspect? (Compare too how when a patient in pain is given morphine, s/he reports the pain sensation is still there as before, but the sensation doesn't seem to matter any more).
It's also possible, I think, that some human peripheral nerve ganglia experience raw pain that our minds can't access. Compare how one sometimes withdraws one's hand from a hot stove before the felt experience of phenomenal agony. Maybe phenomenal pain does exist in some of our peripheral nerve ganglia, and maybe such pain plays a role in our hand withdrawal, but it is inaccessible to the CNS.
If some/all insect pain is "encapsulated" in an analogous way, then is tackling its existence more or less urgent than treating inaccessible pains in the human peripheral nervous system? (My intuition is that vertebrate CNS (and cephalopods) come first; but I'm also aware mere intuitions are frequently worthless.)
* * *
Ben, IMO de-biasing ourselves is a constraint on decision-theoretic rationality, not just morality. To be sure, for evolutionary reasons each of us tends to find ourselves - perceptually and evaluatively - at the centre of the universe, followed typically in importance by family, friends and members of own ethnic group, with sentient beings of other species featuring marginally if at all. But if we aspire to the God's-eye point-of-view delivered by modern science, then we'll recognise that the egocentric illusion is just that - a fitness-enhancing falsehood. Insofar as one's own agony or despair is disvaluable, then so is the agony and despair of other subjects of experience elsewhere in space-time, regardless of race or species. This particular here-and-now isn't somehow special or ontologically privileged.
Vegard, a refusal to condone animal abuse isn't the prerogative of white, young, urban, privileged males. Rather an ethic of non-violence to other sentient beings is central to the values of millions of the poorest people on the planet. I know of no good ethical reason why we should permit sentient beings to be parasitized, starved, disembowelled, asphyxiated or eaten alive, regardless of race or species. Thanks to biotechnology, such cruelties will shortly become optional. Of course, building sentience-friendly biological intelligence is still a formidable challenge - but perhaps a useful apprenticeship for building friendly AI.
Carnivorous plants? Garret, about your "of course" Venus flytraps are sentient. If so, their sentience would blow my preferred theory of mind out of the water! [Their sapience would leave me dumbstruck.] Venus flytraps don't have a nervous system. Can aggregates of cellulose-encased plant cells ever become unitary subjects of experience? But this question would take us far afield - and ranks low on any scale of moral urgency.
* * *
Some cognitive biases are ethically catastrophic:
Meat eaters downplay animal minds
How should we treat other sentient beings?
Farm to Fridge
The Truth Behind Meat Production
The bedrock of human civilisation is the misery of other sentient beings. What should we do about it?
FBI Says Activists Who Investigate Factory Farms Can Be Prosecuted as Terrorists
Killing other sentient beings: H+ or H- ?
Vegans, notables, celebs and the abolition of suffering
* * *
A biosphere without suffering is technically feasible. In principle, science can deliver a cruelty-free world that lacks the molecular signature of unpleasant experience. Not merely can a living world support human life based on genetically preprogrammed gradients of human well-being. If carried to completion, the abolitionist project entails ecosystem redesign, immunocontraception, marine nanorobots, rewriting the vertebrate genome, and harnessing the exponential growth of computational resources to manage a compassionate global ecosystem. Ultimately, it's an ethical choice whether intelligent moral agents opt to create such a world - or instead express our natural status quo bias and perpetuate the biology of suffering indefinitely.
Conservation Biology versus Compassionate Biology
Since the Cambrian explosion, pain and suffering have been inseparable from the existence of life on Earth. However, a major evolutionary transition is now in prospect. One species of social primate has evolved the capacity to master biotechnology, rewrite its own genetic source code, and abolish the molecular signature of experience below "hedonic zero" throughout the living world. This talk explores one aspect of the evolutionary transition ahead, namely interventions to phase out the cruelties of Nature. The exponential growth of computer processing power promises to let us micro-manage every cubic metre of the planet. Responsible stewardship of tomorrow's wildlife parks will entail cross-species fertility regulation via immunocontraception, "reprogramming" predators, famine relief, healthcare provision, and eventually a pan-species analogue of the welfare state. Can science and technology engineer the well-being of all sentience in our forward light-cone?
[on materialism]
Materialism and physicalism are often assumed to be close cousins. But one can be a physicalist and a monistic idealist. Physicalism is the conjecture that no "element of reality" is lacking from the equations of physics and their solutions. Materialism and idealism are conjectures about the intrinsic nature of what this mathematical formalism exhaustively describes. Less intuitively still, the conjecture that e.g. fields of quantum-field theoretic subjectivity are the stuff of the world does not commit us to the view that classical digital computers - or the population of the USA, etc - are (potentially) subjects of experience. For we still need to resolve the phenomenal binding problem - which seems classically insoluble. Why aren't communities of discrete membrane-bound (supposedly) classical neurons merely patterns of "mind dust" - just as the USA (contra Eric Schwitzgebel) is never a unitary subject of experience, just patterns of discrete skull-bound minds?
Brian, let's suppose that - in the interests of an experimental science of consciousness - the population of the USA (etc) is co-opted into replicating the functional properties of a simple brain as conceived by coarse-grained functionalism - with skull-bound American minds acting as proxies for membrane-bound neurons, and rapid electromagnetic signalling acting as proxy for the diffusion of neurotransmitters across the synaptic cleft. For the experiment, let's assume that skull-bound [membrane-bound] units forming columns of interconnected edge-detectors, motion-detectors, colour-detectors are experimentally assembled, as in the CNS.
At no threshold of complexity or functionality does a unitary dynamic phenomenal object, or a unitary experiential field of perception, or a unitary subject of experience, switch on, or even "seem" to switch on ["seem" to what or to whom?].
A critic of this thought-experiment might echo Eric Schwitzgebel and respond: how do we know they don't somehow switch on? How can we know that all that ever exists are patterns of discrete, classical, skull-bound pixels of "mind dust"?
And I'd respond: we don't!
But their "switching on" would be a strong form of ontological emergence: an unbridgeable explanatory gap that would demolish reductive physicalism and the ontological unity of science.
By contrast, all the other forms of emergence in the natural world are "weak", i.e. they are all derivable from the underlying microphysics.
At this point, I guess I could go into my own implausible-sounding conjectures on how phenomenal binding may be accomplished in the CNS of organic mind-brains. But only someone who recognises that phenomenal binding is a profound problem will be tempted to give them even cursory consideration.
I'd just add that finding phenomenal binding to be a huge problem is not some idiosyncratic view on my part.
[My ideas on quantum mind certainly do fall into the idiosyncratic category.]
[on the primordial functionality of the pleasure-pain axis]
Phenomenal redness and greenness - and most if not all other experiences - are intrinsically functionally neutral. We normally think of redness and greenness as being bound up with different electromagnetic reflectancies, but this functional role is a purely contingent fact about evolution. Inverted spectra, absent qualia and the recruitment by natural selection of radically different qualia (cf. synaesthesia) to play an analogous functional role in living organisms are all feasible. By contrast, the pleasure-pain axis is not inherently functionally neutral. An inverted pleasure-pain axis seems physically and indeed logically impossible. There could not be an extra-terrestrial civilisation whose members are drawn to pure agony and despair, and shun pure ecstasy and bliss. Not least, an inverted pleasure-pain axis would be inconsistent with agency, and of choosing to do one thing rather than another. Without going into a deep examination of the pleasure principle/ psychological hedonism, the various human counterexamples one can think of to this claim don't IMO on examination add up. But if this is so, why is it so? I sometimes say that the pain-pleasure axis discloses the world's inbuilt metric of (dis)value. But what explains this (unique?) functional role? Lamely, I'm just going to say I don't know. Unfortunately, I don't think we understand the nature of Reality.
[on different conceptions of a Technological Singularity]
DEBATE FORUM - do you want a Singularity via Artificial Intelligence, or Human Bio-Intelligence?
Most of the Singularitarian contributors to the forthcoming Springer volume discount the prospects of biological superintelligence:
But organic robots can recursively self-modify their own genetic source code too. So I think writing off humans and our biological descendants may prove premature:
Eray, the smarter our tool AI, the faster such AI can enable recursively self-improving organic robots to design autosomal gene-authoring software and smart neuroelectronic interfaces. So in that sense, Moore's law will directly benefit biological robots too. But as you know, I'm sceptical on theoretical grounds that classical digital computers can ever be more than zombies. And this means there is a huge diversity of problems classical digital computers can never solve. How would you program a classical digital computer, for example, to explore the neural correlates of consciousness, solve the phenomenal binding problem, or investigate novel state spaces of qualia à la Sasha Shulgin?
I guess a standard response would be to invoke the Church–Turing thesis and claim that such problems aren't well-defined. But we need a richer conception of what constitutes a well-defined problem. Full-spectrum superintelligence, IMO, will seamlessly integrate the formal and subjective properties of mind.
[...] the conjecture that our superhumanly intelligent successors will not also be our biological descendants rests on contentious assumptions that may - or may not - be correct. Not least, predicting the behaviour of reflexive agents whose behaviour itself depends on the nature of the predictions one makes poses paradoxes that do not arise when predicting the behaviour of systems in the rest of the natural world.
Eray, yes, I agree, the I.J. Good / Eliezer Yudkowsky conception of an Intelligence Explosion, which combines Moore's law with the prospect of recursively self-improving software-minds, predicts the eruption of nonbiological superintelligence orders of magnitude smarter than even posthuman biological intelligence within months, weeks, hours, perhaps even minutes... FOOM.
Or FIZZLE? As you know, on theoretical (and empirical) grounds, I don't think digital zombies are intellectually capable of full-spectrum superintelligence - or tackling the sort of problems (some) humans find interesting. For example, when do you think a classical digital computer will outperform Shulgin and his successors? To embark on an intelligent investigation of the phenomenal properties of certain configurations of matter and energy requires, at the very minimum, that one possesses an understanding of what is a phenomenal property. Or do you believe that a classical digital zombie can understand the nature of sentience? Are you claiming that a Christof Koch or a Sasha Shulgin is not pursuing intelligent scientific inquiry?
Eray, IMO we have good grounds for believing that the Strong Physical Church-Turing Thesis (i.e. any function that can actually be computed in polynomial time by a physical device can also be computed in polynomial time by a Turing machine) is false. If you intend to argue otherwise, then I look forward to reading your paper! In reality, there are most likely physical devices that are exponentially more efficient than a Turing Machine, and in consequence - as Feynman considered in his original paper - exponentially hard to simulate. These devices include IMO the organic mind-brains of biological robots whose world-simulations have been computationally optimised over hundreds of millions of years to track fitness-relevant patterns in the mind-independent local environment. I'm happy to concede this view is controversial; but this doesn't make those who disagree with you "cabbage heads".
Ronald, yes, let's say we want to build a machine capable of indexical thought (e.g. this particular thought). We know precisely how to create a machine to achieve this precisely specified task: just have sex and create another organic robot! But programming a digital computer with a capacity for indexical thought (or the capacity for phenomenal binding, or to investigate psychedelia, or to probe the neural correlates of consciousness, etc) is more of a challenge. Indeed, it's not clear that a classical digital computer can exhibit understanding of the nature or even the existence of the subjective properties of matter and energy. On the other hand, the distinction between the formal and subjective properties of mind cannot be entirely clean - or else it wouldn't be physically feasible to allude to such subjective properties in the first instance. Either way, full-spectrum superintelligence will entail seamless mastery of the formal and subjective properties of mind - or so I'd argue at any rate.
[on one unresearched route to full-spectrum superintelligence]
Recursive genetic self-editing of adult genomes is just a fantasy - for now at least.
However, consider the imminence of a Chinese eugenics program to breed super-geniuses.
And let's assume human cloning will soon be feasible.
Here's a thought-experiment...
Ignoring ethical considerations, imagine if you could be genetically cloned with, critically, the addition and/or modification of a handful of "ultra intelligent" alleles - and your genetically enhanced "clones" are then hot-housed.
Next, repeat with variations, cloning and hot-housing your most promising lineages. Too slow? There is no need to wait until your enhanced clones reach adulthood. For an ultra-intelligent eight-year-old super-genius studying, say, M-theory or the Langlands Program - or exploring intelligence-amplification technologies - is already promising enough to warrant multiple clones with enhanced genetic variants of their own. And so on. Technically, (AI-augmented) biological superintelligence could be achieved on compressed timescales of a century or less via cloning with modifications because there will be no need to use slow-burning germline interventions...
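[To make the compressed-timescale arithmetic concrete, here is a minimal back-of-envelope sketch in Python. Every number in it - the eight-year clone-and-evaluate rounds, a fixed gain per round from a handful of hypothetical "ultra-intelligent" allele edits, the starting score - is an illustrative assumption, not something the thought-experiment specifies:

ROUND_YEARS = 8        # assumed time to clone, hot-house and evaluate one generation
ALLELE_BOOST = 5.0     # assumed gain per round from a handful of edited alleles (illustrative)
START_SCORE = 160.0    # assumed score of the initial "super-genius" donor

years, score = 0, START_SCORE
while years + ROUND_YEARS <= 100:       # compress the whole programme into a century
    years += ROUND_YEARS
    score += ALLELE_BOOST               # the best-of-round lineage carries the gain forward
    print(f"year {years:3d}: best lineage score ~{score:.0f}")

Twelve such rounds fit inside a single century - the point of cloning promising eight-year-olds rather than waiting a full reproductive generation each time.]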
OK, this particular thought-experiment might not sound sociologically realistic. But a billionaire / rogue state might have other ideas. And we can consider lots of other scenarios involving recursive AI-assisted biohacking too...
Of course if, and contrary to what I argue, full-spectrum superintelligence is really about the speed and serial depth of processing at which classical digital computers excel, then not even a hypothetical community of supergenius biological clones could compete with nonbiological super-AGI - or indeed with virtual super-EMs [whole-brain emulations] run on classical digital computers. But for all sorts of reasons, I still think a future of recursively self-improving biological robots is the most likely route to full-spectrum superintelligence.
[on drugs]
Thanks Gabriel. Antimuscarinic drugs can both elevate mood and impair verbal fluency: a cruel dilemma. It's hard to believe that a drug like scopolamine can seriously be touted in some psychiatric circles as a novel antidepressant. Amphetamines elevate mood and promote talkativeness, though not depth of thought, originality, or social cognition. IMO their use is best discouraged. Although I'm optimistic in the long run that superintelligence can be combined with rich hedonic tone, the route ahead is lined with pitfalls:
[on abolishing 'physical' pain]
The urgency or otherwise of this question is perhaps best considered when one has, say, a migraine, rather than while sitting comfortably.
Should we eliminate the human ability to feel pain?
Ian, we must distinguish phenomenal pain from nociception. Nociception is functionally essential; phenomenal pain is potentially optional. Silicon (etc) robots can be programmed to avoid risky behaviours, and likewise connectionist systems can be trained up to avoid noxious stimuli, without undergoing the nasty "raw feels" endured by organic robots like us.
In the long run, we can offload everything nasty onto smart prostheses. We certainly need a more civilised signalling system. But in the short-term, choosing "low pain" rather than "no pain" alleles for our prospective children via preimplantation genetic diagnosis strikes me as ethically responsible.
Sean, it's also possible that one day status quo bias will work in our favour. When we have phased out the biology of pain and suffering in favour of a more civilised signalling system, the idea of recreating the horrors of our Darwinian past - and then inflicting them on new unwilling victims - will seem crazy. I reckon tomorrow's bioconservatives will be right...
Ian, true, people with leprosy typically feel no sensory pain. But this doesn't mean people who enjoy pain-free lives are akin to lepers! As you'll recall from the article, I argue against the creation of uniform well-being, whether physical or psychological. Instead, I argue we should aim for information-signalling gradients of well-being that conserve the adaptive role of negative feedback. Information-signalling gradients of well-being are feasible even if future hedonic tone and future hedonic set-points surpass anything physiologically accessible today. More to the point, our quality of life can thereby be enriched too.
Perhaps the best parallel is congenital analgesia rather than leprosy. Many victims of leprosy suffer neuropathic pain. One cause of congenital analgesia is nonsense mutations of the SCN9A gene: other alleles confer unusually high or low pain thresholds. Either way, I'm not urging that we (yet) abolish physical pain. For now, choosing benign rather than high-pain alleles for our future children strikes me as both responsible and prudent.
* * *
But the Transhumanist Declaration expresses our commitment to the well-being of all sentience:
In the long run, we can aspire to do much more than alleviate suffering. For sure, delivering the well-being of all sentience is not simply a case of amplifying the volume and getting "blissed out" rather than blissful. I very much hope we can amplify and enrich our capacity for empathetic understanding of other sentient beings as well - and recursively self-edit our genetic source code and bootstrap our way to full-spectrum superintelligence! But the point is we can conserve the functional analogues of our nastier Darwinian feelings and emotions without today's nasty "raw feels".
If you're interested, I say a bit more about some of these issues elsewhere.
* * *
[...] Why we experience phenomenal pain at all is a mystery - at least within the context of orthodox scientific materialism. I've dodged this issue in the interview because so long as we understand the necessary and sufficient conditions for phenomenal pain to occur, we can prevent it. I wholeheartedly agree that our immediate priority should be control and mitigation. But in the long run, I know of no technical reason why organic robots can't enjoy profound physical and emotional well-being every day of their lives. Of course, the engineering challenge to make this happen is substantial.
Peter, yes, there is a world of difference between phasing out (physical and psychological) suffering and phasing out nociception / negative feedback mechanisms. In principle, hedonic tone and life itself could be orders of magnitude richer without sacrificing bodily function or critical insight. But let's start modestly...
Eray, I think the really interesting question is not whether philosophical zombies are possible, but understanding why they are impossible - granted a reductive physicalism that assumes the ontological unity of science. How can we rigorously derive the phenomenology of our minds from the underlying microphysics? Most scientists and philosophers still find Strawsonian physicalism repugnant - though IMO the alternatives are worse.
[As an aside, there is a sense in which the avatars populating one's world-simulation are indeed zombies. But when we're awake, these zombies causally co-vary with other sentient beings in the local environment.]
Replace my neurons with microchips? Yes, this would be an effective cure for sentience.
When does pain cross the threshold and become suffering? I guess the answer is in part conventional - which is not to say arbitrary. Ethically speaking, clearly we want to prioritise the alleviation and prevention of outright suffering. Does masochism invert the pleasure-pain axis? Intuitively, one might imagine so. But in the masochist, certain ritualised settings trigger the release of intensely rewarding endogenous opioids. A masochist hates getting his hand caught in the door just like you or me.
Other things being equal, I think profound emotional and physical well-being is preferable to complicated "mixed" states. But in the post-genomic era, we can agree this choice should be left to the individual. Today, hundreds of millions of pain-ridden and/or depressive people have no choice at all.
The issue of children complicates questions of consent. Shortly we'll effectively be able to choose the default level of pain-(in)sensitivity and (un)happiness of our future children. What genetic dial-settings will it be ethical to choose? I hope - and tentatively predict - there will soon be selection pressure in favour of a less pain-ridden and less misery-racked world.
* * *
Wisely, the Transhumanist Declaration doesn't commit us to any specific ethical theory. One can support the well-being of all sentience and be a utilitarian, deontologist, virtue theorist, etc. Any ethical theory that attempts to reduce all ethics to subjective experience is controversial. But what we can say, I think, is that a world without subjective experience would be a world in which nothing mattered at all.
History records stories of people who have successfully overcome terrible suffering. Other folk, alas, simply have their spirit crushed. (Not everyone has your resilience and strength of character Ian!) Either way, I don't think we're ethically entitled to create children genetically prone to misery and malaise simply to give them the opportunity to overcome it. A default state of invincible physical and emotional health will be a more ethical option. Or so I'd argue at any rate...
* * *
Evolution has primed us to think so, Eray. Alas, cheating the hedonic treadmill via "natural", nonbiological means won't be easy. This is both a blessing and a curse. Recall how, of "locked-in" patients who can communicate by blinking, forty-eight percent of long-term survivors report their mood as good.
By contrast, many depressives would be miserable in the Garden of Eden...
* * *
Ian, like everyone here I am opposed to coercive happiness! This strikes me as a sociologically unlikely prospect. The biology of involuntary suffering, alas, is all too real.
Hedonic recalibration is not a social panacea. But other things being equal, someone with a higher hedonic set point, who enjoys a richer capacity for rewarding experience, will enjoy a better quality life than someone with a high genetic loading for depression.
Are pain and pleasure wholly or largely relative, as we might naively suppose? Tragically, millions of people in the world today endure chronic pain and/or depression. Some of their days aren't as awful as others. But it would be cruel to suggest that their merely miserable days were somehow happy. Conversely, there are cases of extremely "hyperthymic" people whose lives are animated by gradients of well-being. Days they merely find rewarding (rather than brilliant) aren't spent below Sidgwick's "hedonic zero". By the same token, posthumans won't need to experience the biology of life below hedonic zero at all.
* * *
Eray, suffering embitters at least as often as it ennobles. No doubt some hyperthymic people can be obnoxious. Others are loveable and widely loved. Hyperthymia is not out-of-control mania. One well-known example of a loveable person with an extremely hyperthymic temperament (and I use him as a "case study" by express permission!) is transhumanist scholar Anders Sandberg.
Ian, above I gave equal weight to remedying our deficits in systematising intelligence and social cognition alike. It's no disrespect to people with high AQ scores to say they find one cognitive style easier than the other.
Anecdotally, many people with extremely high AQ scores also have extremely high pain thresholds, which may colour their response to this question. One of my friends has an AQ of 37. He either has pain asymbolia or an extraordinarily high pain threshold. People with extremely high pain thresholds tend to regard phenomenal pain as a mere signalling mechanism rather than a horror they would do almost anything to avoid. The point is not that we should seek to make everyone neurotypical. Rather it's to ensure that people - and I hope nonhuman animals! - who involuntarily undergo deeply unpleasant forms of physical and emotional distress today should no longer be forced to do so.
If you are comfortable with your existing biology, this is good news (seriously!). We just need to ensure that all other sentient beings can feel the same.
* * *
Ian, rich and complex bodily feedback is possible relying only on gradients of pleasure, as lovers if not celibate philosophers can attest. Are you arguing there are some problems that organic robots can solve that are computationally intractable without an algorithmic role for the "raw feels" of pain? I don't rule this out; but their indispensability would be a momentous discovery in robotics and computer science. (cf. the Church–Turing thesis)
Other things being equal, life is better subjectively with less physical and emotional pain. Innumerable lives today are blighted by depression and chronic pain syndromes. Other things being equal, life would be subjectively richer still if we were animated by information-sensitive gradients of bliss. The "other things being equal" caveat here is essential, for some of the reasons you discuss. Likewise, I very much hope that we can enrich our capacity for empathetic understanding of other first-person perspectives - beyond the Machiavellian intelligence that was adaptive on the African savannah. But the creation of safe and sustainable empathogens poses many challenges.
* * *
Ian, IMO a commitment to the well-being of all sentience cannot involve promoting sadism. Today, people don't choose what turns them on; consensual role-play is fine. But any predisposition to derive pleasure from hurting other sentient beings is surely not a personality trait we wish to encourage. I'd argue that the ideal level of sexual violence in the world should be zero.
Masochism? Well, in principle masochists can derive richer sexual pleasure than now without needing to submit themselves to pain and humiliation. With biological tweaking, nothing need be lost; and much can be gained.
A type-11 person in your example, who experiences only information-sensitive gradients of well-being, cannot imagine the subjective textures of experience below hedonic zero. But s/he knows, by analogy with his or her information-signalling dips in blissful well-being, that such states are less desirable than his or her sublime hedonic peaks. By analogy today, I cannot imagine what it's like to be tortured. But I can remember, sort of, my last toothache; and by analogy with that toothache I realise that torture must be hugely worse. I would not wish torture on anybody. In short, "Light without darkness is darkness" is fine poetry but poor science. And pleasure without pain is not numbness but simply pleasure. Posthumans, I predict, will enjoy an everyday intensity of experience orders of magnitude richer than the trancelike existence of their sleepwalking ancestors. Perhaps we're going to "wake up".
* * *
Ian, we fundamentally disagree only if you believe our existing biology of pain and suffering should be compulsory. I think the biology of pain and suffering should be optional. So (I hope!) do you. If so, then we have consensus on the key issue. On to your more specific points...
Hedonic dips? Surely pain that isn't actively unpleasant isn't pain?
Development and survival? Are you ruling out recursively self-improving nonbiological robots that are endowed with nociception but lack the nasty "raw feels" of pain and suffering? For sure, we wouldn't want computationally to offload the good things in life onto smart prostheses. But what about the nasty stuff?
"Light without darkness is blindness"? Recall I'm arguing against uniform bliss, whether physical or psychological. Today, people endowed with high hedonic set-points and high pain-thresholds tend to enjoy a richer quality of life than depressives and the frequently pain-ridden. But people at both ends of the hedonic scale can behave intelligently and adaptively. Critically, information-signalling depends not on one's absolute position on the pleasure-pain axis, but rather on differences in hedonic (or dolorous) tone. Tragically, some depressive and pain-ridden people today rely entirely on information-signalling properties of ill-being for much of their lives. Life animated by information-signalling gradients of well-being is surely preferable.
Reduced diversity? On the contrary: it's depressives who typically tend to get "stuck in a rut". By contrast, temperamentally happy people tend to be more motivated and more sensitive to a broader range of rewarding stimuli. Thus (other things being equal) global mood-enrichment will make getting "stuck in a rut" less likely, both for individuals and for civilisation as a whole.
Eliminating kinky behaviour? Heaven forbid! Recall I was arguing against the promotion of sadism, i.e. sexual gratification derived from hurting others. If someone gets turned on by the thought of beating children, for example, this isn't to say they should be blamed for such fantasies. People today don't choose their psychosexual make-up. But we can both agree that such people should be strenuously discouraged from acting out their sadistic fantasies - and it would be in everyone's interest, not least the people in question, if they weren't prey to such fantasies in the first instance. [Socially unacceptable behaviour acted out in immersive VR raises difficult issues I won't enter into here.]
Global wireheading? Recall I've argued at length against such scenarios elsewhere (e.g. Hypermotivation)
* * *
Julian, yes, for masochists, "pain, in the right circumstances can be a lot of fun". But that's because the composite state in question is actually exquisitely pleasurable. Endogenous opioids released in certain ritualised contexts can be rewarding. Sadly, millions of people whose lives are blighted by physical pain today find the "fun" of pain elusive.
The intelligibility of literature, art, etc? Well, one might say that truly to understand e.g. anti-Semitic literature, one needs to be a little bit prejudiced against Jews. In one sense, at least, this may be the case; but if so, it is a form of knowledge one should be happy to forgo. Likewise, perhaps one would understand, e.g. Boethius' "Consolation of Philosophy" better if one were physically tortured like its author. The question is whether the gain in understanding would make the torment worthwhile. For my part, I would love to understand the aesthetic experiences of our superhappy posthuman successors instead.
Ian, is it possible you're confusing sentience with sapience? ("What we feel emotionally is distinct from our sentience.") Emotions are one manifestation of sentience.
Depression (as distinct from e.g. pain and fear) seems peculiar to social animals. Evolutionary psychologists tend to view low mood as an adaptation to group living in a predator-rich environment like the African savannah.
(cf. Rank theory) In future, there will be no need to replicate its horrors - and no need to sacrifice critical insight either. For we could preserve the functional analogues of discontent even in a civilisation whose members are animated by gradients of intelligent bliss. Just consider the happiest and most productive hyperthymics alive today.
Any drug, gene therapy or technology may potentially be abused. Should technologies to relieve and prevent suffering be withheld on the grounds that they might one day be used by professional killers? I guess exactly the same argument could in theory be made against licensing, say, psychostimulants and anti-anxiety agents, and even against enrichment of our native oxytocin function:
In short, phasing out the biology of pain and suffering presents many potential problems. This is one reason to encourage intelligent and informed debate. How can new technologies of reproductive medicine such as preimplantation genetic diagnosis be used most responsibly? What is the optimal range of pain-sensitivity and hedonic tone?
In the post-genomic era, should some children's default genetic-settings include a predisposition to live depressed and pain-ridden lives? What are the ethically permissible limits to the control of parents over the minds and bodies of "their" offspring? (cf. life-saving blood transfusions for the sick children of Jehovah's Witnesses today). Who should decide those limits? I've always argued that the greatest obstacle to the well-being of all sentience won't be technical but rather ethical / ideological. And status quo bias.
If one has children via genetic roulette, i.e. traditional sexual reproduction, then one puts them at risk of all manner of deeply unpleasant states. Consider the hundreds of millions of people in the world today who suffer from chronic depression and / or chronic pain syndromes. So there isn't a risk-free option. Instead, we need to weigh risk-reward ratios. Thus I think there is a powerful case for making available preimplantation genetic diagnosis to all prospective parents who want it for their future children even now, the mere dawn of the era of genomic medicine.
[on involuntary suffering]
Each year around one million people in the world take their own lives. Over ten times that number unsuccessfully attempt to commit suicide. Far more people still commit acts of serious self-harm. Hundreds of millions of people are clinically or subclinically depressed.
And yet...
A recent French study of long-term patients with "locked-in syndrome", i.e. people who suffer a catastrophic trauma and can subsequently communicate only by blinking, found that 72% of patients reported themselves as being "happy".
This percentage compares favourably with healthy "normals".
And an Ipsos poll in the Economist recently compared international levels of self-reported well-being of "normal" people. The differing national percentages of people who described themselves as "very happy" isn't what one would expect. Indonesia came first. India came second. Mexico came third.
What's going on?
We know some events send our spirits soaring.
Other events plunge us into despair.
So why is any long-term correlation between "objective" grounds for (un)happiness and actual (un)happiness seemingly so weak?
The answer, it seems, is the hedonic treadmill - an evolutionarily ancient set of negative feedback mechanisms in the brain. Each of us has an approximate hedonic set-point. Twin studies show that genes play an important role in whether your hedonic set-point is high, low or somewhere in-between. Likewise, genes help explain why some people fluctuate wildly between an unusually high hedonic ceiling and an unusually low hedonic floor, whereas other people are more equable.
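[Purely as illustration, the negative-feedback picture can be caricatured in a few lines of Python. The set-point, the size of the weekly mood shocks and the reversion rate are placeholder numbers of my own, on the -10 to +10 scale used in the thought-experiment that follows:

import random

SET_POINT = 1.0      # assumed genetically constrained average hedonic set-point
REVERSION = 0.1      # assumed fraction of the gap to the set-point closed each week

random.seed(0)       # reproducible run
mood = SET_POINT
for week in range(1, 53):
    mood += random.gauss(0.0, 1.5)          # good and bad events perturb mood
    mood += REVERSION * (SET_POINT - mood)  # the treadmill pulls mood back
    mood = max(-10.0, min(10.0, mood))      # hedonic floor and ceiling
    if week % 13 == 0:
        print(f"week {week:2d}: mood {mood:+.1f}")

However large the shocks, the long-run average hovers near the set-point - which is why lottery wins and bereavements alike tend to wash out of self-reported happiness statistics.]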
So here's a thought-experiment. Imagine if a magic genie makes you an offer - for both you and your future children. You can choose your own hedonic floor, your hedonic ceiling and your average hedonic set-point. Plus 10 is superhappiness. 0 is hedonically neutral, i.e. neither pleasant nor unpleasant. Minus 10 is indescribable misery.
What hedonic range would you choose?
Would you choose to suffer at all, i.e. how low would you set your hedonic floor?
And what would you make your average hedonic set-point - where you spend most of your everyday existence?
Unless you choose Plus 10 for both your average hedonic set-point and ultimate hedonic ceiling, then your mood as now will still fluctuate up and down over the weeks and months as good and bad things happen to you - and depending on whether you succeed or fail in your life projects. In other words, you'll still retain critical insight. But assuming that you choose to occupy a hedonic range in the higher reaches of the scale, then your hedonically enhanced life is likely to be immensely subjectively richer than now - whether you live as a prince or a pauper.
Or would you decline the genie's offer?
Of course, today we don't have any choice about either our "natural" hedonic set-point or hedonic range. "Antidepressants" and mood stabilisers can sometimes help. Some unlucky depressives spend almost all their lives deep in negative gloom - they rarely even reach hedonic zero. At the opposite extreme, unusually temperamentally happy people spend almost their whole lives in positive territory. Probably a majority of people have an average hedonic set-point somewhere near hedonic zero - perhaps plus 1 or 2, or minus 1 or 2 - but with a wider hedonic range to explore when things go well - or badly. And of course some people are temperamentally stable, whereas other people experience huge mood swings.
However, the genetic crapshoot is about to change.
Medical scientists already breed super-happy or super-depressed strains of laboratory mouse for "research" purposes. Thanks to e.g. human twin studies, we are starting to understand the link between different alleles [genetic variants] and normal mood - and why some people seem born invincibly optimistic, and others chronically depressed. Over the next few decades, we could in principle choose to "recalibrate" our emotional thermostat - using biotechnology to raise hedonic floors, hedonic ceilings, and average hedonic set-points world-wide.
In time, preimplantation genetic screening will be succeeded by true "designer zygotes", allowing far more radical interventions. The biology of Heaven, for want of a better term.
Life on Earth could - potentially - be animated by gradients of intelligent bliss, or at the very least gradients of well-being - not uncontrollable "manic" euphoria, just a rich hedonic tone as a backdrop to "ordinary" life. Technically at least, we could phase out the nasty allelic combinations - crudely speaking, the "bad genes" - that predispose so many of us to depression, anxiety disorders and other unpleasant Darwinian states of mind that helped our ancestors spread more copies of their genes on the African savannah.
We'll also be able to choose personality variables - such as whether to be temperamentally empathetic or coldly analytical, highly motivated or calmly contemplative, secular-minded or super-spiritual and so forth.
And in the long run, we'll be free to choose whether we want to experience any experience below hedonic zero at all.
The biology of suffering will have become optional.
[on the Singularity weblog]
Singularity 1 on 1
David Pearce interviewed by Nikola Danaylov
[on transhumanism]
Superlongevity? Superintelligence? Superhappiness?
Seba, yes indeed. Perhaps one way to undercut status quo bias is to imagine mankind stumbles upon a Triple S civilisation. Then ask critics what characteristics they would urge its inhabitants to change. Should they bring back involuntary aging? The biology of suffering? Predation, parasitism and disease? Congenital feeble-mindedness? Even to discuss such notions can sound absurd...
Coercion? Here come the Pleasure Gestapo? The suspicion that someone, somewhere, is going to try and force you to be happy is surprisingly common. But the historical record suggests that the infliction of involuntary pleasure rather than pain is vanishingly rare...
* * *
I fear today's notions of "super"-intelligence have about as much cognitive content as an idea from Mary Poppins:
Alas many of the best-known strands of transhumanism resemble semi-independent solar systems whose ideas rarely cross-fertilise - if you'll pardon my mixed metaphors.
* * *
"God's in his Heaven / All's right with the world!" said poet Robert Browning. Sometimes, I get the impression that even secularists believe tampering with the wisdom of Nature would be hubris - like defying divine Providence. However, IMO the last half billion years on Earth have been a bloodstained horror story.
* * *
Some folk think quantum minds are impossible. I'm sceptical classical minds are coherent:
("Biology takes a quantum leap")
David Wallace's "The Emergent Multiverse" and Jim Holt's less technically demanding "Why Does The World Exist?" are my candidates for best books of the year:
("What Can You Really Know? by Freeman Dyson")
We still have a long way to go:
("Five Top Reasons Transhumanism Can Eliminate Suffering : Futurology")
* * *
David Brin is a wonderful writer. Even if one suspects that primordial life-supporting Hubble volumes where life originates more than once are rare, one should bear in mind the possibility of error. And indeed if post-Everett quantum mechanics is correct, alien civilisations must presumably exist parallel [or rather orthogonal] to our own, though their interference effects will be negligible.
Radical transparency or windowless monads? One can see both the utopian and dystopian potential of radical transparency in Brin's sense. And an extension of the concept to all sentient beings is probably a precondition for the well-being of all sentience - though there's no guarantee of such a benign outcome.
* * *
A touchstone of intelligence is the capacity to distinguish between the important and the trivial. Hedonic calculus quantifies this capacity. Of course there's a risk of special pleading here. A criterion of intelligence that flatters one's own and ranks one's opponents as stupid is going to be suspect. But intelligence with no conception of what's (un)important seems unworthy of the name.
* * *
I'd just scream in despair...
* * *
Sean, yes, it's a dilemma. For reasons that are poorly understood, there is often a trade-off [in humans] between a systematising cognitive style and an empathetic cognitive style. Hyper-systematisers typically aren't very empathetic. But unless we can persuade warm, empathetic people that their compassion must be systematised, their kindness is often dissipated. [Compare my empathetic friend who spends much of her life both caring for cats and rescuing the traumatised mice they've mauled.] Bill Gates was a ruthless entrepreneur; but his unsentimental cost-benefit approach to Third World development / vaccinations means he can do more good than the kind-hearted soul who bequeaths her fortune to her cat.
* * *
Alas for adverse side-effects
("PLOS ONE: Testosterone Administration Reduces Lying in Men")
* * *
Intuitively, yes, Jonatas. Insofar as empathy is just an emotion, it can undoubtedly lead to bias rather than impartial ethical rule-following. But the more sophisticated forms of empathetic understanding (cf. higher-order intentionality) are hugely cognitively demanding. The superior perspective-taking capacity of early humans - together with generative syntax - seems to have promoted recursively improving co-operative problem solving, and in doing so, driven the evolution of distinctively human intelligence. So it's odd to think that superintelligence might have a more stunted capacity for empathetic understanding than archaic humans.
In what sense can one be super-intelligent if one is super-ignorant, as are e.g. insentient digital computers? First-person facts don't have second-rate ontological status. To understand the perspectives of other unitary subjects of experience, you have to try and imagine what it's like to be someone else (what's it like to be someone of a different gender, ethnic group, sexual orientation, culture, species, etc.). Indeed. The opposite of the convergence hypothesis for (super)intelligence is the orthogonality thesis - though I'd argue that a cosmological understanding of the pleasure-pain axis would converge on God's utility function, so to speak. (Don't worry, I'm not a closet theist!)
Sean, if you'll forgive my pedantry, most sociopaths aren't actively sadistic. Yes, they typically know - in some weak, attenuated sense of "know" - that their victims suffer. They just don't care.
True, we might imagine an insentient super-AGI programmed with the utility function of a classical utilitarian that systematically converts the world into utilitronium without any inkling of why phenomenal pain and pleasure actually matter. Indeed, if programmed to do so, an insentient digital super-AGI would presumably convert the world into dolorium instead without any inkling why this outcome was ethically catastrophic. Yet if we actually want to understand the world - not just its formal properties, but the intrinsic subjective properties of matter and energy and their comparative (un)importance - then IMO we'll need full-spectrum intelligence, not digital zombies.
Does a meat-based diet harm both killers and victims?
("Why Do Vegetarians Live Longer")
Is NonFriendly AI a form of Superintelligence - or a hypothetical virulent kind of malware?
(Alexander Kruel · "How intelligence probably implies benevolence")
* * *
How do we overcome the trade-off?
("Empathy represses analytic thought, and vice versa")
* * *
"But suffering is needed to produce great art and literature." (?) ("Man or machine - can robots really write novels?") At the same time, reason cannot understand emotions without experiencing them - one reason to expect full-spectrum superintelligence to be benevolent.
* * *
I was surprised to learn Ashkenazi “visuo-spatial” IQ scores were comparatively low: IMO this finding should be replicated. ("Why is the IQ of Ashkenazi Jews so High? - 20 Possible Explanations")
* * *
("'The Self' in the Future: Will it be Extinguished, by Neuroscience?")
* * *
Jean, I think "physical" and emotional pain are intimately connected:
("Is Rejection Painful? Actually, It Is")
We need to eradicate both sorts of nastiness in favour of mere information-signalling dips in well-being - hopefully shallow dips and hopefully sublime well-being.
What might go wrong?
"Gaiety is the most outstanding feature of the Soviet Union" - Joseph Stalin.
* * *
All humans need gene therapy IMO...
Shortly we'll all be able to self-edit our own genetic source code and modify the archaic malware called "human nature".
Etienne, I hope user-friendly gene authoring tools and editing packages will mean most of us won't need to master base pair sequences any more than today we need to program in machine code. Open source? Well, perhaps. But whom would you trust to optimise your source code?!
I fear somatic gene therapy needs to be repeated over time on account of cell turnover. In the long run, however, I hope we can use germline therapy to pass on a heritable predisposition to invincible happiness, super-genius intellect and eternal youth.
* * *
Open source? Etienne, I confess I'd rather have my code designed by Apple. Alas my existing code owes a greater debt to Hieronymus Bosch. I'd probably be more engaged with the Open Source movement were it not for the fact that I'd struggle to program a toaster.
Yes, I very much agree with you: genetic engineering will be our main stepping stone to transhumanism. This is one theme I hope to discuss in Lund next week. Other transhumanists argue that biological humans - and even biological transhumans - are destined for the dustbin of history.
I very much hope you'll live to see the applications within your lifetime too. What a terrible tragedy if youthful dietary indiscretions led you to miss Aubrey de Grey's Methuselarity by just a few years!
("Want to Live Longer? Eat Vegan!")
Dustin, the Encode website is excellent. Thanks. I wonder about true Open Source genetics. In future, will there be a genetic counterpart to the Drug Enforcement Agency - the GEA? - to police our genomes against rogue biohacking?
Some of us could benefit from rewriting from scratch:
("How Science Can Build a Better You")
Dustin, I agree with you. There are huge pitfalls to Open Source genetics. Freeman Dyson below is too relaxed about the risks, I think. Despite my libertarian, pro-Open Source instincts, we do need (democratically accountable) regulation of genetic engineering. Recall though how today each act of sexual reproduction is a unique genetic experiment with unknown consequences. Adding a modicum of intelligent planning (i.e. choosing low-pain, pro-social, pro-happiness alleles for our prospective children) is more responsible, I think, than putting our faith in the wisdom of God or Mother Nature.
"Our Biotech Future" by Freeman Dyson
We now have a much better understanding of why consciousness is impossible:
("CultureLab: Will we ever understand how our brains work?") The Intelligence Implosion?
No, I'm not convinced either.
("Controversial study suggests human intelligence peaked several thousand years ago")
* * *
But what matters is suffering not sapience...
("When does an animal count as a person?")
* * *
If I were a lawyer, I might try and plead that US drug laws are unconstitutional on the grounds they deny citizens their inalienable right to the pursuit of happiness. A counterargument is that we're often incompetent to do so and most existing drugs are lousy. But the Declaration of Independence proclaimed our right to the pursuit of happiness, not happiness itself.
Persuading anyone to give up what they believe in favour of what you believe is often an impossible challenge. However, if you can show them that what you believe is a logical implication of what they believe, then you're in with a hope. Perhaps.
* * *
The Sun Says...
("'Terminator centre' to open at Cambridge University")
* * *
Three future scenarios for intelligence:
("Humans and Intelligent Machines - Co-Evolution, Fusion or Replacement?")
If a magic genie gave transhumanists everything we asked for (eternal youth, unlimited material abundance, superintelligence, etc.) with the exception of reward pathway enhancements, then six months hence the hedonic treadmill would ensure we'd probably not be significantly (un)happier than we are now. Thankfully, we don't need to rely on magic genies.
* * *
Jean, I very much agree that humans need to develop our stunted emotional intelligence. I think an equally powerful case can be made that we must develop our capacity for impartial ethical rule-following. Compare the outrage that's greeted the front-page news that beef burgers are "contaminated" with horse meat - as though eating cows were somehow less objectionable than eating horses.
("UK’s Tesco struggles to survive ‘horse meat’ scandal")
Would the greater longevity and higher IQ scores recorded for vegetarians compared to meat-eaters be greater still if they consumed fish? It's possible; but since the advent of total parenteral nutrition [for people who've lost the function of their small intestine, and who previously just died] we know there is no missing secret ingredient that non-meateaters lack [though of course strict vegans must take supplemental B12]. Today, fish tend not to be euthanased but die horribly. Utopian technology (and I hope ethics to match) should in future prevent such a ghastly fate.
Oxford University Press: "Do Fish Feel Pain?" by Victoria Braithwaite.
Jean, I think the 8+ point IQ gap here in the UK between vegetarians and meat-eaters unduly flatters meat eaters. This is because today's so-called IQ tests are "mind blind". Any test of general intelligence with ecological validity would give due weight to scoring the mind-reading prowess that drove the evolution of distinctively human intelligence. Such a test would almost certainly show the disparity in intelligence between meat-eating dullards and vegetarians is higher still. Of course, correlation doesn't prove causality. Not least, intelligent children are more likely to go vegetarian in the first instance.
Empathy as a mere personality variable? I'd beg to differ. A theory of mind is extraordinarily cognitively demanding. Not all humans master the perspective-taking skills entailed; and when they do so partially, the outcome isn't always pretty:
But it's true a richer cognitive capacity for mind-reading promotes a greater empathetic understanding - and increasingly, lifestyle changes to match.
* * *
All we need is love?
Alas we need better genes and better drugs IMO:
(with thanks to Hank)
Utopian Pharmacology - Mental Health in the Third Millennium / MDMA and Beyond
* * *
What unites rather than divides transhumanists?
(with thanks to Hank Pellissier)
* * *
("David Pearce - Prophetic Narratives / Humanity+ @San Francisco 2012 - Videos -")
Vesna, I sympathize. Most humans currently satisfy the diagnostic criteria of sociopathy as laid out in the Psychiatrist's Bible, The Diagnostic and Statistical Manual of Mental Disorders (DSM). Of course the framers of the DSM "obviously" intended to exclude human treatment of sentient beings from other species - in the same way that our forefathers "obviously" intended to exclude women and blacks (etc) from their formulations of human rights. However, the solution is going to be education. Most of us are merely quasi-sociopaths - and potentially educable.
* * *
[on transhumanism in Australia]
Australia: ground zero to an imminent Intelligence Explosion...?
Science and the Future
Or a fantastic chance to meet the cast of Neighbours?
Customs and immigration weren't quite as laid back as I expected. The officer wanted to know what was a "transhumanist". He got a two-minute spiel - after which I was waved through as though I'd just descended from Planet Zog. Melbourne is a kind of Uncanny Valley of British civilisation - all the more disconcerting because the natives speak a recognisable approximation of English. Hordes of excited teenage girls were camped outside the hotel this morning. Sadly, it transpired they were not waiting to hear my solution to the binding problem, but hoping to catch sight of rumoured guest Justin Bieber. I assured them I too was a Belieber.
"No matter how much I try, I can't figure out how to not be adorable!” (Justin Bieber) Possibly Mr Bieber underestimates his abilities; but the world needs more self-love and less self-hatred.
* * *
“Sanity and happiness are an impossible combination.” (Mark Twain) I confess I sometimes pine for psychosis.
David Zuccaro, yes, all the more reason to hardwire the default settings of tomorrow's hedonic floor above today's hedonic ceiling. Lots can go wrong in posthuman paradise. But let's ensure the worst catastrophes aren't as bad as today's peak experiences.
David, perhaps recall I don't advocate or predict a biology of constant bliss - even though IMO we can and should engineer ecosystems of life-long well-being whose hedonic floor surpasses today's hedonic ceiling. My main worry about fictional characters like Sonmi~451 is how they come to symbolise what an innately blissful future amounts to - and reinforce today's bioconservative status quo.
Violence? IMO it's not merely that gene therapy will make us more moral. The advent of naturalised telepathy via technology may induce a shift in the nature of decision-theoretic rationality. If I can feel your preferences and desires as my own, then wantonly harming you will seem, not just immoral, but stupid.
(cf. "Will we ever communicate telepathically?" =)
* * *
David, the (supposedly!) discrete classical neurons of the central nervous system form a hive mind - at least when we're not in a dreamless sleep. But beware cheap imitations. An ant colony - or even a hyperconnected population of skull-bound brains - is not a true hive mind IMO [Perhaps see the discussion on theories of mind and the binding problem triggered by Andres' questionnaire in the Hedonistic Imperative FB group.]
[...] None of us enjoy an excess of rules and regulations. But perhaps consider economic sectors where they are lacking. How lax would you want, say, food or airline safety to be? What were the world-wide socio-economic effects in 2007-2008 of two decades of cumulative financial deregulation?
Absolute upper and lower bounds to pleasure and pain? David, sure, with our existing biology. Experimentally, we can investigate how hard an organism will work to obtain or avoid different rewarding or noxious stimuli, up to an upper limit. These "operationalised" measures of (un)happiness agree well with the genetic, neurobiological and pharmacological evidence (e.g. investigation of how rewarding selective activation of mu opioid receptors in our twin hedonic hotspots is by full, partial and inverse agonists, etc). But nothing stops us from genetically tweaking, adding extra copies of, and "over-expressing" (or under-expressing) the relevant genes - and physically scaling up the size of our reward centres beyond today's puny dimensions.
The alleged computational-functional necessity of the "raw feels" of pain?
Do you believe the Church–Turing thesis is false? Also, perhaps see:
Question: if you had to choose, would you rather be a rich young successful unipolar depressive - or a quadriplegic victim of "locked in" syndrome who blinks to communicate he is "happy"?
David, we each instantiate a world-simulation with a genetically constrained hedonic range and hedonic set-point. It's not "subjective idealism" to say that the quality of life of a dirt-poor hyperthymic peasant exceeds that of a depressive prince. Feel free to recast my question. If you now had to choose between having an ostensibly successful but depressive future and becoming a happy locked-in syndrome patient, which option would you pick?
[on the limits of computability]
"You insist that there is something that a machine can't do. If you will tell me precisely what it is that a machine cannot do, then I can always make a machine which will do just that."
(John von Neumann)
But what kind of machine?
Psychedelics and Sentience
It's great that Stanford transhumanists are offering cruelty-free all-vegan cuisine. (Peter Thiel is also leading the way with in vitro meat development: "Billionaire Peter Thiel donates to 3D meat bioprinting lab".) Where Stanford leads, let's hope the rest of the world will follow. Despite the delightfully evocative graphic (thank you Stanford Transhumanist Association!), the talk will be delivered in an approximation of ordinary waking consciousness - a claim critics might contest.
* * *
Teemu, central to my conceptual framework is reductive physicalism - i.e. no "element of reality" can be missing from the quantum field-theoretic formalism of physics - and the daunting challenge of the phenomenal binding problem: unitary consciousness seems classically forbidden. How can Levine's "explanatory gap" be closed? If it can't, then farewell to the unity of science...
[on humans, transhumans and posthumans]
Humanity 2.0?
Or was Henry Ford right?
Humanity 2.0
My only previous visit to Oslo was to the world's first in vitro meat symposium:
I trust that in future all meat will be lab-grown.
Let's hope Beyond Meat goes global, Brian. "Technical solutions to ethical problems" may not be the most morally heroic transhumanist slogan. But it's probably more effective than talk of mass murder and a cannibal holocaust. Whether our invitrotarian grandchildren will be so forgiving, I don't know. [I discuss quitting meat and the invitrotarian revolution in the talk.] I was caught off-guard by some of the questions. It's always struck me as axiomatic that transhumanists should help sentient beings rather than harm them. The controversial stuff (by my lights at least) on quantum mind etc comes towards the end.
Andres, young Norwegians are civilised, urbane, ask intelligent questions in flawless English, and have impeccable manners. So the event went rather well. Moderator Ole Martin is a hedonist after your own heart. And my nominal opponent, Sean Hays, is a thoughtful commentator rather than a polemicist. Perhaps the only topic we diverged on completely was cryonics. [Unlike Stanford, not everyone was bursting with entrepreneurial enthusiasm to launch their own start-up; but then Norway has a lot of oil.]
"Thou shalt not recalibrate the hedonic treadmill" is a prohibition unknown to the Bible. The miracle of the loaves and the fishes foreshadows in vitro meat. The [genetically tweaked] lion lying down with the lamb can be interpreted as the fulfilment of Biblical prophecy. And if mere mortals can envisage the well-being of all sentience, then an All-Merciful, All-Compassionate God can scarcely be more stunted in the range and depth of His benevolence. OK, perhaps "The devil can cite Scripture for his purpose." But that quote is Shakespeare, not holy scripture.
[on posthuman superintelligence]
The launch of the Springer "Singularity Hypotheses" volume in London on Saturday. Crudely speaking, will posthuman superintelligence be 1) our eugenically redesigned biological descendants, 2) a Kurzweilian fusion of humans and machine intelligence, 3) a nonbiological singleton AGI as prophesied by MIRI (formerly the Singularity Institute), or 4) none of the above?
Technological Singularities
Some general background to the debate:
MIRI (Machine Intelligence Research Institute):
Ray Kurzweil:
Eugenics plus "narrow" AI:
Mike, radical life-extension is indeed one of the goals of the transhumanist movement as a whole. But (runs the argument) superintelligence can deliver both radical life-extension and effectively unlimited material abundance for all. And - I hope - the well-being of all sentience in our forward light-cone. What could go wrong? Lots, I fear...
Ian, this is controversial to say the least. Optimistically, it may be argued that superintelligence entails a superhuman capacity for perspective-taking and empathetic understanding - a radical extension of the (fitfully) "Expanding Circle" of compassion chronicled by Peter Singer and Steven Pinker in "The Better Angels of Our Nature":
But many Singularitarians are meta-ethical anti-realists who would broadly agree with the "Orthogonality Thesis" as argued by e.g. Nick Bostrom:
Thus MIRI, most notably Eliezer Yudkowsky, argue that Non-(Human) Friendly AGI is the most likely outcome of an Intelligence Explosion.
Whether this outcome would be good or bad is itself controversial.
Is empathy a personality variable or a cognitive achievement? Or both? Even if we acknowledge that mind-reading prowess is a precondition of full-spectrum intelligence, sceptics would argue that empathy is no guarantee of friendliness - as the evolution of "Machiavellian Intelligence" in higher primates shows all too well. In reply, it could be argued that the sinister side of empathy reflects genetically adaptive distortions in our perspective-taking capacity - a cognitive bias that full-spectrum superintelligence would necessarily transcend.
("Humans And Monkeys Share Machiavellian Intelligence")
Dave, yes, this is the first century in evolutionary history where intelligent agency really could end the world - or at least life on Earth. Even so, sterilising the planet would pose a formidable technical challenge. And you probably shouldn't trust a negative utilitarian to do existential risk appraisals.
Ian, indeed so. I could sing you a happy song of an imminent posthuman era of paradise-engineering. But Brian Tomasik, for example, believes that the real horror story of life has barely begun:
* * *
Time to move the goalposts?
("Human enhancement ethics. Is it cheating?")
Video of the launch:
video production and editing by Adam Summerfield
I was mildly disconcerted to learn from event organiser, the admirable David Wood, that one person had cancelled attendance and demanded their money back on learning I'd be speaking - on the grounds they "didn't want to be exposed to a lot of vegetarian propaganda".
* * *
Naively, one might imagine that the fate of cognitively humble beings in the face of vastly superior intelligence is at the heart of any debate about the risks and opportunities of posthuman superintelligence.
(Rest assured that there is only a brief clip of me foaming at the mouth.)
My co-speaker and lead editor of the volume is Amnon Eden.
I argue that a classical digital computer could not grasp what it is to be important - although we could certainly program it with a utility function to behave in ways that are systematically interpretable by sentient beings as showing it values some states as important.
It's hard to overstate the gulf between the optimistic Kurzweilian "fusion" scenario and the darker MIRI vision - where humans and all our values will most likely be superseded by a non-friendly singleton super-AGI.
Also, although I contrast biologically-based conceptions of posthuman superintelligence with the Kurzweilian approach, this division is simplistic. Some measure of "cyborgisation" is presumably inevitable. The big question is whether you think that posthuman superintelligence will most likely retain its biological, neuronal core. [I didn't have time to discuss nonbiological quantum computing. But IMO the possibility of life based on such an architecture this century is remote.]
* * *
The Singularity will be televised?
("The Singularity Film – a documentary by Doug Wolens")
Volume 2 of the Springer Singularities series. How many more volumes before human extinction - or apotheosis?
* * *
But boosters of nonbiological superintelligence will claim poor old organic robots can't compete.
* * *
Thanks Vito. In "Zendegi", Greg Egan explores Singularitarian themes; but some of the discussion he provoked may be opaque to outsiders who don't know where the bodies are buried...
Danny, yes, I fear we're sleepwalking to the nuclear abyss. Our greatest cognitive achievement as a species would be expanding our circle of empathetic concern to embrace the well-being of all sentience in our forward light-cone. Maybe this will eventually come to pass. But I'll be amazed if there isn't nuclear war this century - whether local, theatre or a full-blown strategic interchange between the superpowers I don't know.
But life will go on...
Depending on whether you conceive yourself as a type or a token, inflationary cosmology suggests we may be effectively immortal. A curse or a blessing?
Stefano, you can tell a depressive or pain-ridden person there is more to life than abolishing suffering - and you'd be right. But physical and psychological health is the bedrock of flourishing lives. I hope invincible well-being can be the taken-for-granted backdrop to posthuman life - not so much the goal as the precondition.
Rui, in a sense you're right. Only members of an intelligent species could have organised the Holocaust, the Gulag - or factory farming. But I think the only solution to the world's ills is more intelligence, not less. Only one species has the technical capacity to phase out the biology of suffering in our forward light-cone. If humans were to disappear, the horrors of "Nature, red in tooth and claw" would continue indefinitely.
Rui, true, futurology has a dismal track record. Few if any of us have deep insight into the ramifications of what we're up to. But post-Cambrian life on Earth has been a 540 million year horror-story. For the first time in history, intelligent beings have the capacity to rewrite their own genetic source code and bootstrap their way to superintelligent civilisation. The bedrock of true civilisation, I believe, is an absence of involuntary experience below hedonic zero.
* * *
Just as there are a finite number of "perfect" games of chess, there may hypothetically be a finite state-space of "perfect" cognitive and affective states from which no full-spectrum superintelligence would ever depart. But nothing in the abolitionist project obliges anyone to sign up for my fanciful eschatological musings. Yes, I agree with you Stefano, the feedback role of pain and suffering has been critical to the lives of sentient organic minds. Discontent has been the motor of progress. But the question I'd ask you is whether any of the raw feels of experience below hedonic zero are computationally indispensable - and hence irreplaceable either by non-sentient prostheses or information-sensitive dips in well-being? Maybe you'd claim that the raw feels are indispensable. So intelligent life based on gradients of bliss is impossible. But proving this claim would be a profound result in computational science.
* * *
By "hedonic zero" I have in mind Sidgwick's sense of the term, i.e. the divide in the pleasure-pain axis marked by experience that is hedonically neutral - lacking in either positive or negative hedonic tone. Isn't this a natural watershed - and invariant over time? Either way, advocates of abolitionist bioethics do well to stress ad nauseam that we're not talking about coercive happiness. No one, to my knowledge, threatens to rob you Stefano of your right to stay "dissatisfied, restless, challenged". As the technology matures, the real question to ask is what if any coercive measures do you think should be taken against reformers who seek ensure sentient beings all have the freedom to enjoy life-long well-being? How much, and for how long, should anyone be forced to suffer? Enforced by what means? And by whom?
* * *
Stefano, our genes are already extraordinarily good at making us dissatisfied. But the kinds of things we are typically dissatisfied "about" - lack of wealth, status, sexual opportunities etc - tend to be things that helped our genes leave more copies of themselves in the ancestral environment, not the well-being of all sentience. By all means preserve the functional analogues of discontent minus its nasty "raw feels". For my part, contemplative bliss sounds more appealing. But unless we redesign our reward circuitry, then even if we create a utopian society with utopian technology to match, our subjective quality of life will not be significantly better than now.
Javier, yes. A while ago I culled a collection of quotes about soma from Huxley's "Brave New World". BNW may be a severely sub-optimal utopia. But it's paradise compared to the lives of most sentient beings alive today.
Marion, Kristin, I fear whoever is greediest for power will forever be most likely to get it. All the rest of us can do is attempt to infect tomorrow's movers-and-shakers with our ideas when they're still young and impotent. Or as Keynes put it, “Practical men who believe themselves to be quite exempt from any intellectual influence, are usually the slaves of some defunct economist. Madmen in authority, who hear voices in the air, are distilling their frenzy from some academic scribbler of a few years back.”
Stefano, many thanks for the biopolitix link. I've now got (I hope!) a less simplistic idea where you're coming from. But is your position inimical to abolitionist bioethics - or simply orthogonal to it? Yes, we have very different historical meta-narratives; but I would hope they can be reconciled.
(I don't know if my work loses or gains in Italian, but here's a URL:
Jason, I can understand why you might be tempted by the blue pill - the allegedly blissful ignorance of illusion. But I reckon posthumans will regard Darwinian life itself as a form of depressive psychosis. Which option would you choose if presented with Felipe De Brigard's so-called Inverted Experience Machine Argument? Either way, radical hedonic recalibration promises the advantages of realism without the dreadful psychological costs.
Stefano, yes, indeed! Apologies if I inadvertently implied otherwise. Perhaps it's worth your stressing that your critical comments on Jønathan Lyons' essay on IEET are directed at one (or perhaps many?) of the different flavours of abolitionism rather than abolitionist bioethics per se.
* * *
[on abolishing suffering in Kurzweil’s Sixth Epoch Scenario]
Kurzweil’s Sixth Epoch
("Abolition is Imperative in Kurzweil’s Sixth Epoch Scenario")
Jason, skull-bound experience machines were "designed" by natural selection to leave more copies of our genes. Ethically speaking, IMO we should replace them with experience-machines designed to maximise the well-being of all sentience. In the long run, should we aim structurally to "mirror" the basement reality or instead create designer paradises in VR? I don't know - but reward pathway enhancements can make either option seem sublime.
Stefano, a commitment to phasing out the biology of involuntary suffering does not entail a commitment to utilitarian ethics, let alone some kind of eschatology. For sure, an abolitionist ethic does rule out the idea of “Back to the Cretaceous” and a Nietzschean world-view. But if you’re looking for nightmarish historical parallels, one twentieth century movement exalting Nietzsche’s work springs to mind. (“I do not point to the evil and pain of existence with the finger of reproach, but rather entertain the hope that life may one day become more evil and more full of suffering than it has ever been.” - Nietzsche was not a fascist, but his writings abound in such rhetoric.)
Phasing out involuntary suffering is consistent with increasing the diversity of life. Not least, genetic engineering potentially allows intelligent agents to cross gaps in the fitness landscape otherwise prohibited by natural selection.
Why assume that phasing out involuntary suffering entails a commitment to phasing out competition? By itself, radical elevation of our hedonic set-points allows us to be just as co-operative or as competitive as before. My personal preference would be for enhanced empathy and co-operative problem-solving; but this is a separate issue from abolitionist bioethics.
* * *
Stefano, Rick, there is a critical distinction between being blissful and “blissed out”. Yes, uniform well-being is inconsistent with critical insight and intellectual progress. But abolitionist bioethics isn’t about building a “perfect” world. Radical genetic recalibration of our hedonic set-points via biotechnology promises hugely to enrich our quality of life while (optionally) leaving our values and most of our existing preference architectures intact. This prospect isn’t science fiction. Already we are beginning to decipher the alleles and allelic combinations implicated in possession of an unusually high (or low) hedonic set-point. Which variant of the COMT gene, for example, do you think we should choose for our prospective children?
Of course, there is a difference between reducing the burden of suffering in the world and the complete abolition of involuntary experience below hedonic zero in our forward light-cone. But if we can contemplate a 100 Year Plan to achieve interstellar travel, then why not a 100 year plan to eradicate the molecular signature of negative hedonic tone? I’d hesitate to say which challenge is technically harder. But I know which is more morally urgent.
* * *
Rick, you’re surely right to draw attention to potential pitfalls. Life is messy. But nothing in the theory or practice of abolitionist bioethics entails harming other sentient beings in any way. Thus the use of immunocontraception to regulate fertility doesn’t entail literal physical castration. The mass use of sterilants doesn’t harm Anopheles mosquitoes - unless we believe a mosquito has reproductive rights. To be sure, critics may charge that abolitionists want to “exterminate” carnivores. This is just poetic license. A species is a taxonomic abstraction. Unless we’re species essentialists, a lion that eats in vitro meat does not thereby cease to be a lion - any more than members of Homo sapiens cease to be human if we start wearing clothes and adopt a cruelty-free vegan diet. And even if the civilising process does mean we are no longer “truly” human, does this transition ethically matter?
Pain? In the long run, I know of no technical reason why phenomenal pain can’t be abolished completely via the use of e.g. nonbiological smart prostheses to perform its current role in nociceptive signalling. But in the short-to-medium term, we may rob physical suffering of its moral urgency by using preimplantation genetic screening to choose benign “low pain” alleles of the SCN9A gene for our future children - and then extend this pre-selection process “down” the phylogenetic tree:
In short, high-tech Jainism - no violence at all.
* * *
SHaGGGz, abolitionist bioethics is wholly consistent with respect for the sanctity of life. The term “exterminate” is surely best reserved for acts of killing.
You ask “What is such a [eco]system ultimately for, anyway?” I’m sceptical such questions have a determinate answer. Either way, abolitionist ethics isn’t about answering teleological mysteries or solving the Meaning of Life. Rather we just want to secure the minimum biological preconditions necessary to allow all sentient beings - human and nonhuman - to flourish, most notably an absence of involuntary experiences with negative hedonic tone. In the case of large, free-living terrestrial vertebrates in our wildlife parks, recognisable extensions of existing technologies can potentially suffice. The plight of small rodents, let alone invertebrates, must await an era of mature nanotechnology next century and beyond. I’m not sure where “bask[ing] in our own glory” comes in. Humans are responsible for more suffering in the world today than perhaps all other species combined. We’re also the only species intellectually capable of rescuing suffering sentients from the abyss of Darwinian life. Whether we’ll rise to the challenge is another matter.
* * *
SHaGGGz, one purpose of canvassing such costly, complicated and technically demanding interventions as
is precisely so no one need feel signing up for abolitionist bioethics entails saying farewell to “charismatic mega-fauna”.
You remark that “entire classes of organisms and their ways of life are to be extirpated”. One might say the same of the many subcultures of human predators. No, I certainly don’t think human predators, child abusers (etc) should be harmed. But ethically we recognise that protecting the young, the innocent and the vulnerable takes priority.
* * *
Giulio, intuitively, yes. But one existence proof that perpetual bliss combined with perpetual desire is feasible is intracranial self-stimulation (ICSS: “wireheading”). Wireheading shows no physiological tolerance. This is a world away from the genetically elevated hedonic set-points and the prospect of information-sensitive gradients of intelligent bliss we may anticipate animating our posthuman successors. But technically, wireheading would be a lot easier.
Giulio, alas not. The hedonic treadmill still grinds. We know from e.g. twin studies that hedonic set-points have a high degree of genetic loading. Hyperthymic people like our distinguished colleague Anders Sandberg (“I do have a ridiculously high hedonic set-point”) are rare.
But this is why there is such a compelling case for ensuring our future children can be super-Anders, i.e. blessed with a predisposition to information-sensitive gradients of well-being, rather than locked into permanent uniform bliss. Even today, choosing via preimplantation genetic screening a handful of benign alleles / allelic combinations that predispose to high hedonic set-points could potentially hugely enrich the lives of our offspring. Next-generation designer zygotes will allow much more ambitious enhancements. And next decade and beyond, I hope mature humans will gain mastery of our reward circuitry and recalibrate our hedonic set-points (and motivations, anxiety thresholds and empathetic understanding) too….
* * *
CygnusX1, thanks for the thoughtful comments - and the hotlink. Apologies btw for the seemingly one-dimensional focus on the biological roots of human ills and disregard of the social and political context. But the viciously efficient negative feedback mechanisms of our hedonic treadmill mean that even if everything IEET readers dream of for the future were to come true - a utopian society and utopian technology to match - our level of subjective (un)happiness would be unlikely to change significantly in the absence of direct reward pathway enhancements. This prediction violates our intuitions. It’s also empirically well supported.
Rick, one needn’t be a utilitarian of any kind to endorse abolitionist bioethics. But you raise a difficult question about death and mourning. If death or misfortune befalls a loved one, then surely we should want to grieve. Not to do so would cheapen our relationships.
An (inadequate) response to this objection is simply to argue that we should use medical technology to overcome ageing and other ills of the flesh. Aubrey de Grey’s ground-breaking “Ending Aging” is the inspirational text here. No law of Nature condemns organic robots to grow old and die. The problem, I think, is that barring truly revolutionary breakthroughs in medical science, the biology of ageing and death are likely to persist well into next century and perhaps beyond. Maximum human lifespan is still not increasing. Technically at least, mood-enhancement, hedonic set-point elevation and even life based on gradients of information-sensitive bliss seem potentially easier to engineer than genetically preprogrammed eternal youth. So what should we do in the transitional era - when we can regulate subjective well-being but not the ravages of ageing?
I’d argue that one is entitled to want one’s death or misfortune to diminish the well-being of friends and loved ones. But one isn’t entitled to want them involuntarily to suffer on one’s own account. If one does want friends and loved ones ever to suffer, then in what sense are one’s relationships based on true friendship, rather than egotistical self-regard? Vainly perhaps, I’d want my death or misfortune to trigger a steep but reversible decline in the well-being of friends and family. But I wouldn’t want - and I don’t think I’m ethically entitled to want - this decline to pass below hedonic zero.
Either way, recall abolitionist ethics is not about coercive happiness. Rather it’s about giving everyone - human and non-human - mastery of their emotions. No one should be compelled to endure the biology of involuntary suffering as they do today.
* * *
Rick, first, many thanks: I thought I was broadly familiar with the different categories of objection to the abolitionist project. But you’ve raised a worry I hadn’t even considered. Might opting to switch off some sub-personal module that mediates suffering itself constitute a form of coercion? Certainly, there are extreme cases of dissociative identity disorder (the condition formerly known as multiple personality disorder) where this dilemma might rear its head. Dissociative identity disorder is now often conceived as dimensional rather than categorical. Even in “healthy” normals, the unity of the self - both synchronic and diachronic - is radically incomplete. And what about people who have had a corpus callosotomy? (“split-brain” patients)
Or people with florid schizophrenia?
All I’ll say here is that, in the last analysis, these are marginal cases. Further, there is a fundamental difference between the biology of coercive (un)happiness imposed by some external agency and any internal dilemma posed by wrestling with different aspects of oneself.
What about the other potential pitfalls you raise? Well, I’d make exactly the same response to critics of, say, radical life-extension. Should this be our overarching goal, i.e. that no one should be forced to undergo the biology of aging or experience below hedonic zero? Policy makers should try and guard against the sorts of worries you raise within the bounds of the overall project. Or is the abolitionist project - or a radical anti-aging program - itself irredeemably flawed?
As you know, I think quasi-immortal posthumans will be animated by gradients of bliss orders of magnitude richer than today’s peak experiences. By posthuman standards, humans are pain-ridden savages. But the transition is likely to be messy.
* * *
Thanks Rick. “If I won the lottery, I could live happily ever after.” No, most of us are too sophisticated to say such things. Knowledge of the hedonic treadmill - and comparative outcomes for lottery winners and paraplegics - is now quite widely acknowledged, in the abstract at least. Yet I wish more futurists and social engineers would take the lesson to heart.
CygnusX1, can one commit acts of violence against an abstraction? I think this is a metaphor too far. For sure, if high-tech Jainism ensures that sentient beings are no longer predated in our wildlife parks, then the genetic tweaking of the archaic lion and crocodile genome entailed will not involve the explicit prior consent of would-be predators. But what about the consent of the would-be victims? The only literal violence involved here is upholding the status quo. Do you think sentient beings should be violently disembowelled, asphyxiated and eaten alive? Granted, right now this is a philosophical question. Shortly it will be a pressing ethical choice.
You urge greater dispassion on the part of humans. Others would argue for greater passion. Nothing in abolitionist bioethics entails taking a stand either way. Compare the role of, say, a physical pain specialist. The job of the chronic pain specialist is not to expound his conception of the good life. Rather it’s to ensure that his patients can enjoy physically pain-free lives that maximise their opportunities to flourish. Likewise with tomorrow’s specialists in phasing out the biology of involuntary “psychological” distress - depression, anxiety disorders, jealousy and our nastier Darwinian adaptations. Possession of an exalted hedonic set-point is equally consistent with e.g. hypomanic exuberance or “dispassionate” meditative tranquillity - and a galaxy of other temperaments and lifestyle options besides. Barring reward pathway enhancements, however, most future life will be subjectively mediocre - at best - just as now.
* * *
CygnusX1, on any strict construction of “identity” you are undoubtedly correct. A human or nonhuman serial predator who ceases to prey on other sentient beings is no longer the same. At its most extreme, we may take the Buddhist or ultra-Parfitian view that there is no such thing as an enduring personal (or infrahuman) identity over time. Heraclitus put it well 2500 years ago. “No man ever steps in the same river twice, for it’s not the same river and he’s not the same man.”
However, irrespective of our position on identity, there is surely a fundamental distinction between claiming that ideally humans and nonhumans alike should be free-living and the claim that we should be “wild”. Undomesticated “wild” humans have behaved in all sorts of ways we would now recognise as deeply unethical. I won’t catalogue them here. Steven Pinker does a good if gruesome job in “The Better Angels of Our Nature: Why Violence Has Declined”
The fact free-living humans are now (partially) tamed and domesticated tends to enhance our overall freedom more than it constricts. Compare, say, the practicalities of air travel. Yes, the prospect of “policing” Nature probably sounds Orwellian. But compassionate stewardship of the living world promises to confer greater opportunities to live free and flourishing lives than most sentient beings enjoy today.
* * *
CygnusX1, for the most part, I find Buddhist ethics admirable. Buddhists locate suffering and its conquest at the heart of the world. It’s hard to know what the historical Gautama Buddha would make of biotechnology. The limited historical evidence suggests that Gautama Buddha was a pragmatist, not an ideologue. Crudely, if it works, do it. Thus meditation and the Noble Eightfold Path have undoubtedly offered solace for millions of believers over the centuries. However, meditation and the Noble Eightfold Path don’t genetically recalibrate the hedonic treadmill. They don’t modify the nastier bits of the genetic code we pass on to our children. Nor can they abolish the horrors of the food chain. If we endorse a Buddhist vision of a cruelty-free world, then we need to embrace the one technology that can deliver the well-being of all sentience. I reckon Gautama Buddha would approve.
* * *
Futurephilosopher, when considering human predators and their victims, we normally believe that the interests of the victim should take precedence. Why reverse this precedence when the predators and victims are nonhuman?
Either way, the seemingly irreconcilable conflict of interest between predators and their “prey” can be overcome via utopian biotechnology. Let’s consider your example. For reasons of energy efficiency, lions tend to be “lazy”. Lions hunt only when they (or their cubs) are hungry. So laying on in vitro meat in tomorrow’s wildlife parks can ensure big cats don’t suffer - whether from frustrated predatory instincts or indeed from hunger pangs. In the long run, however, perhaps some genetic tweaking is in order too…
* * *
Futurephilosopher, first, I think it’s great that the issue of free-living animal suffering is being explored. I never thought this would happen in my lifetime. Presumably until this century the cruelties of Nature were simply seen as an inescapable fact about the world rather than as technically optional. But I’m also aware there’s something fanciful-sounding about discussing compassionate interventions in traditional ecosystems while humans are systematically killing and abusing billions of sentient beings in our factory-farms and slaughterhouses. That said…
Yes, it’s anthropomorphic to call a lion a “murderer”. It’s not anthropomorphic to describe a lion as a serial killer. For sure, lions and other predators help prevent an ecologically catastrophic population explosion of herbivores. Smallpox, the Anopheles mosquito and other pathogenic organisms traditionally helped prevent an ecologically catastrophic population explosion of humans. The question is whether there are ethically more acceptable forms of population control - in human and nonhuman animals alike. Thanks to technologies of fertility regulation, the answer is now clearly yes.
Zoos? Ethically, we’d agree that neither human nor nonhuman animals should ideally be held captive. Yet to be free-living isn’t synonymous with “wild”. Thus do some young human males today feel frustrated because their atavistic warrior, hunter (and sexual) impulses are checked by the constraints of modern civilisation? Undoubtedly yes. But the solution to their frustrations is not “rewilding”. Likewise with nonhuman predators.
Biblical prophecy? Yes, the echoes are deliberate. The lion and the wolf shall lie down with the lamb (etc). The reason for liberal use of quotes from the scriptures (Christian and otherwise) is to convince the traditional-minded that abolitionist bioethics is simply an extension of their existing values into the modern era, not a plea to embrace some revolutionary new ethic. Certainly, the goal of phasing out the biology of involuntary suffering shouldn’t be conceived as exclusive to secular classical utilitarians. It’s a precondition, I believe, of any advanced civilisation.
* * *
Futurephilosopher, should we really avoid wiping out, say, malaria for fear of triggering some unforeseen side-effect beyond our power to anticipate? How about smallpox? We’ll always need to weigh risk-reward ratios. Either way, when deciding whether or not to mitigate - and eventually abolish - the cruelties of Nature, let us recall that humans already interfere - massively - in ecosystems across the living world, from uncontrolled habitat destruction to captive breeding programs for big cats to “rewilding”, etc. So the question is not whether to intervene but rather what principles should govern our interventions. Should we endorse the ideology of so-called conservation biology? Or instead an ethic of compassionate stewardship? Or perhaps some combination of both?
You worry about anthropomorphism. If anything, I don’t think we’re “anthropomorphic” enough when weighing the depth and significance of nonhuman animal suffering. The experience of hunger, thirst, fear - and the terrible experience of being asphyxiated, disembowelled or eaten alive - is not mediated by different genes, neurotransmitter pathways or cellular structures in human and nonhuman animals. On the contrary, the same genetic and molecular pathways (and behavioural responses to noxious stimuli) of our core emotions are strongly conserved in the vertebrate line. Of course this convergence of evidence doesn’t amount to a rigorous proof that the pleasure-pain axis unites all sentient beings. But then we can’t disprove radical philosophical scepticism about other (human) minds either. We’re dealing with an inference to the best explanation.
Involuntary suffering of any kind is shortly going to become optional. What right have humans to conserve it?
* * *
Craig, empirically, heightened mood is associated with a stronger sense of agency, a greater sense of self-efficacy, and belief in free will. By contrast, depression is associated with fatalism, learned helplessness and behavioural despair. Compare the effect of "power drugs" such as amphetamines and cocaine.
I was highlighting how none of us is plugged directly into the real world. When we're awake, our skull-bound simulations do track some of its fitness-relevant features. Human responsibility to "mirror" the external world is discharged, I think, when we have done literally everything that rational agency can do to help suffering sentients elsewhere. (cf. "Suffering in the Multiverse") We'll then be free to live in VR designer paradises of our own making. Until then, humans can use radical reward pathway enhancements to enrich our lives - but without sacrificing our responsibilities.
* * *
Thanks Craig. Suffering in the Multiverse is the bleakest piece I've ever written. I console myself with the thought that its conception of Reality may be hopelessly ill-conceived. We don't really know what's going on. Nick Bostrom, for example, thinks intelligent agents might phase out the biology of suffering and later recreate it in the guise of an ancestor simulation - and quite conceivably we're living in one of them.
* * *
Which superpowers will elude our superhuman successors?
(with thanks to John)
("Infographic: A Massive Chart of Every Superhero's Powers Ever.")
[on happiness]
The Moral Maze
Happiness: a live debate tonight between a psychologist, a psychoanalyst, a Buddhist and a transhumanist. Yes, a moral maze - but some of us think we know where's the (vegan) cheese. Freud hoped psychoanalytic therapy could transform "hysterical misery into common unhappiness". I hope transhumanists can do a little better - though I'm not convinced Middle England is ready for a utilitronium shockwave.
Dave, Thanks. Yes, I was hoping to engage Oliver about his interpretation of e.g.
("Identification of risk loci with shared effects on five major psychiatric disorders: a genome-wide analysis")
or indeed
But this wasn't the ideal setting.
Vesna, indeed so. Crazily enough, BBC Radio 4 is what passes for highbrow in the UK. I guess I'm lucky in one sense. At a pinch, my core views can probably be summed up in a single sentence ("Let's use biotechnology to engineer the well-being of all sentience"). Some people's views can't even be travestied in less than ten.
Eric, indeed. The route to nirvana does not lie in selective surgical ablation of our "sadness centres". Our attachments would lack depth if our death or misfortune didn't trigger a pronounced lowering of mood in friends and loved ones. But from what hedonic set-point should our mood be lowered - and to what depth? I'd want my death or misfortune (reversibly) to diminish the well-being of friends and loved ones. But IMO I'm not ethically entitled to want them to suffer on my account.
Eric, yes, a lot of room. Rare cases of extreme hyperthymia aside, why hasn't Nature thrown up people animated by gradients of bliss who aren't also manic? I guess one partial answer is that our genes simply don't care. As long as the relevant informational sensitivity is achieved, there's no selection pressure in favour of gradients of well-being rather than ill-being for their vehicles. The existence of both functionally adequate happy and malaise-ridden folk alike might suggest this. On the other hand, it's also plausible that a predisposition to different forms of mood-congruent cognitive bias - crudely, wearing rose-tinted versus blue-tinted spectacles - can be adaptive in different circumstances. If we were all depressive realists, humans would still be living in caves. But sometimes a predisposition to depressive realism can be fitness-enhancing - until the coming reproductive revolution of designer babies, at any rate.
One powerful hint that the "raw feels" of phenomenal pain and misery aren't computationally indispensable is offered by the growth of artificial intelligence. The performance of e.g. AlphaDog would not be improved by pain qualia. Nor would Deep Blue play better chess if the program experienced anxiety whenever its king were put in check. Unlike many of my transhumanist colleagues, I do believe that organic robots are special. In my (idiosyncratic) opinion, macroscopic quantum coherence is a prerequisite of phenomenal object binding in our world-simulations: otherwise we'd be zombies. But this conjectural specialness of organic robots doesn't extend, I believe, to the inevitability of suffering - as if nasty raw feels were somehow functionally indispensable to organic minds of any kind.
Perhaps compare distinguished transhumanist scholar Anders Sandberg ("I do have a ridiculously high hedonic set-point").
No technical reason exists why we can't selectively breed or deliberately create [via "designer genomes" and autosomal gene-editing tools] a civilisation of super-Anders. At any rate, thanks to biotechnology, humans will shortly have the opportunity to choose our own hedonic range. Do we want to conserve experience below hedonic zero - or relegate it to the dustbin of history?
By "consciousness", Garret, are you referring to reflective self-awareness - rather than, say, phenomenal pain or panic, neither of which entail sophisticated model-building? Either way, if we hope to conserve the ontological unity of science, then we must derive - or show how we might in principle derive - all aspects of our experience within the mathematical straitjacket of our best theory of the world, quantum physics. As far as I can tell, phenomenal binding and the (fleeting, synchronic) unity of consciousness is classically impossible. If phenomenal binding of distributively processed features in the CNS is not a manifestation of quantum coherence, then all bets are off IMO. We should be loathe to abandon reductive physicalism.
Phenomenal binding would seem classically forbidden. What is the alternative? Unless quantum mechanics breaks down in the mind-brain, we know macroscopic quantum coherence in the CNS must occur. As you suggest, Garret, thermally-induced decoherence must destroy such states within (what we naively suppose is) a vanishingly short time, maybe picoseconds or less. Naively, this is simply too fast for any computational and/or phenomenological work. All we'd find experimentally, probing at such ridiculously short temporal resolutions, is "noise". But this is an assumption, not an empirically tested theory. Instead, I would make a falsifiable prediction. When we eventually probe the mind-brain at such a fine-grained scale, we'll discover not "noise", but the formal shadows of the "bound" macroscopic phenomenal objects of our everyday world-simulations, i.e. a perfect structural match between phenomenology and neurological microstructure whose ostensible absence helps propel David Chalmers into his naturalistic dualism. That said, the ramifications of combining Strawsonian physicalism with quantum coherence in organic minds are not well suited to exploration on a show like The Moral Maze.
[on mind uploading]
Universal destructive uploading might be an elegant solution to the problem of suffering; but would you press the UPLOAD button?
Mind Uploading
David Pearce versus Ben Goertzel
* * *
Two very different perspectives on mind uploading:
Ben Goertzel and I both take pan-experientialism /Strawsonian physicalism seriously. Where we differ is over whether classical digital computers could ever solve the phenomenal binding problem.
* * *
Scepticism about technical feasibility can prevent one adequately exploring the potential ramifications of uploading if one's theory of mind is wrong. Robin Hanson has written a stimulating but as yet unpublished book on what an upload-dominated civilisation might consist in. Brian Tomasik worries that the creation of digital sentience will be the recipe for untold suffering, not digital nirvana.
* * *
If it walks like a duck, quacks like a duck, looks like a duck....then it's still a digital zombie IMO.
Advocates of the Strong Physical Church-Turing Thesis would disagree. I'm personally unclear just how (un)faithfully any digital zombie we could physically create could behaviourally simulate an organic sentient. Solving the binding problem is perhaps the greatest cognitive achievement of organic minds over the past half-billion years; and we scarcely have a clue how the mind/brain routinely carries it off.
* * *
Is Daniel Dennett a better sailor or philosopher?
("The philosopher Daniel Dennett talks about his 16th book, “Intuition Pumps and Other Tools for Thinking,” which W.W. Norton is publishing next week.")
* * *
If we had a profound understanding of the existence of consciousness, a precise quantitative understanding of its trillions of different textures and a solution to the IMO classically insoluble phenomenal binding problem, then just maybe we might contemplate non-destructive uploading. Sure, whole-brain-emulation is supposed to allow us to finesse our ignorance. But unless we already know what is functionally relevant and what is computationally incidental, how can we begin? IMO a revolutionary breakthrough in our understanding of consciousness must come first.
* * *
Our best theory of the world seems to be telling us that reality supports googols of nearly type-identical copies of "you". I can't say I feel tempted to create digital counterparts of organic minds, even if it were technically feasible. Smart angels perhaps, but not malaise-ridden humans...
* * *
The show went well. Ben and I actually agree on a lot of things - not least the credibility of Strawsonian physicalism as a precondition for any solution to the Hard Problem of consciousness. We differ over whether the phenomenal binding problem has a classical or quantum mechanical solution - and on whether "uploads" could ever be more than zombies.
* * *
A Moravec transfer? Austin, I may not be the best person to ask because I'm a sceptic about the possibility of any enduring personal identity over time. IMO such a metaphysical notion is a genetic fitness-enhancing illusion - pragmatically useful but false. However, I'm also sceptical that the neurons in an awake/dreaming mind are classical objects as normally conceived either - and so I don't think a "Moravec transfer" in any guise would work.
* * *
Austin, we tend to find quantum mechanics "spooky", and the classical world normal. But from a physicist's perspective, a bigger mystery is the emergence of classicality - or something like it - from a world that is wholly quantum. If e.g. edges, textures, motions, colours etc of experiential objects were really processed distributively in the CNS by discrete classical neurons, then how could such patterns of membrane-bound "mind dust" generate bound phenomenal objects and unitary subjects of experience? The question isn't whether macroscopic quantum coherence exists but rather whether, as critics claim, it is too short-lived to be of any conceivable computational or phenomenal relevance.
* * *
Tanzanos, intuitively you're right - and the functionally unique properties of the carbon atom or liquid water are irrelevant to consciousness. But the prediction of quantum mind theorists, i.e. that classical computers can never support non-trivial consciousness, has been borne out to date. I anticipate that the march of the zombies will continue.
* * *
"These days I barely exist above zombie."
World-wide digital zombification would improve mental health but a more circuitous route seems likely. Please let me know if I can ever help Gregory.
* * *
Marko, assuming I'm wrong about the impossibility of digital sentience, I agree. But then is your namesake who wakes up tomorrow morning anything more than an imperfect copy of you now?
* * *
Jason, intuitively you're right. Yet dreamless sleep interrupts continuity. For what it's worth, IMO there are only here-and-nows strung together in particular sequences thanks to natural selection.
* * *
Depending on how seriously one takes Everett, reality supports googols of effectively type-identical copies of you that rapidly decohere ("split") and (very) rarely fuse. I must assume that most of my namesakes feel they are in some way special or privileged too. On the other hand, I remain sceptical that I have a digital mindfile in anywhere but Platonic Heaven.
* * *
We may quantify the degree of "splitting" and likelihood of fusion with decoherence functionals. For a macroscopic system like the mind-brain, fusion is vanishingly rare.
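For readers who want the formalism behind that remark (this is the standard consistent-histories definition, nothing specific to minds): a coarse-grained history alpha is represented by a time-ordered chain of projectors, and the decoherence functional is

D(\alpha,\alpha') = \mathrm{Tr}\!\big[ C_{\alpha}\, \rho\, C_{\alpha'}^{\dagger} \big], \qquad C_{\alpha} = P^{(n)}_{\alpha_n}(t_n) \cdots P^{(1)}_{\alpha_1}(t_1).

Histories have effectively "split" when the off-diagonal terms D(alpha, alpha') for distinct histories are negligible; the size of any residual off-diagonal amplitude quantifies the (for a macroscopic system like the mind-brain, vanishingly small) likelihood of interference or "fusion".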
* * *
Marko, the kindest thing I could do to my namesakes would be to blast them with a utilitronium shockwave launcher; but I fear they're mostly out of range.
* * *
Gregory, I fear "mangling" is more common:
* * *
Mass destructive uploading to digital nirvana is one of the more exotic forms of existential risk. Perhaps I should support it.
* * *
Indeed. I might urge the world to join me. However, the alleged feasibility of whole-brain emulation assumes that the mind-brain is essentially classical, i.e. that phenomenal binding is not the signature of macroscopic quantum coherence. Here we may differ.
* * *
But did Nature get there first?
("Google Delves Into Quantum Artificial Intelligence: Google has launched a programme aimed at using quantum computing to improve the way machines learn in order to solve tough problems")
* * *
But not all quantum mind theorists buy into Penrose's collapsing wave functions.
("The Singularity Is Near: Mind Uploading by 2045?")
Some futurists predict humans will be able to upload their consciousness to computers in the near future.
* * *
"The problem with the transhumanist movement is that there's only one path to heaven."
("What is transhumanism? As we approach the last days...")
Christianity: "One woman's lie about having an affair that got seriously out of hand."
* * *
The rise of Robo insentiens?
("Modest Debut of Atlas May Foreshadow Age of ‘Robo Sapiens’")
* * *
("Transhumanism debunked: Why drinking the Kurzweil Kool-Aid will only make you dead, not immortal")
For a rebuttal:
* * *
Are organic robots best upgraded or replaced?
("What will the future hold for cyborgs, the fusion of humans and machines?")
* * *
Aya, indeed. I'm genuinely agnostic, however, on just how closely a program running on a real-world classical digital computer could emulate the behaviour of Homo sapiens.
* * *
I think we agree here! A distinct though related question is purely behavioural. When, if ever, will a classical digital computer be able to pass the Turing Test - and if not, what are the (perhaps exceedingly subtle) questions that are likely to trip it up?
Of course, I can't prove that digital sentience is impossible - any more than we can disprove philosopher Eric Schwitzgebel's contention that the United States is conscious. Rather consciousness somehow "switching on" would amount to a breakdown of reductive physicalism.
("The Splintered Mind: Is the United States Conscious?")
* * *
Yes Victor, great movie IMO! Like most works of sci-fi, it works best if you suspend disbelief...
* * *
Sentience will soon be unnecessary for taking today's IQ tests:
("Computer smart as a 4-year-old")
* * *
"Neuromorphic" zombies:
("The machine of a new soul")
If lab-grown brains proliferate, will it be irrational to wonder if you are one of them?
Miniature 'human brain' grown in lab
An ethical and epistemological disaster looms: the Simulation Argument rides again....
* * *
"How is information born?"
A profound question, Rui. My best guess is that information is never truly born, merely conserved - at zero. Just as (controversially) the world's positive mass-energy is cancelled out by negative gravitational potential energy to zero (cf. Alexander Vilenkin's elaboration of Ed Tryon's conjecture, "Is the Universe a Vacuum Fluctuation?"), by analogy, perhaps zero information = all possible self-consistent descriptions = Everett's multiverse. Whether some sort of "zero ontology" is the ultimate basis of Reality - and an explanation of the age-old conundrum, "Why is there Something Rather than Nothing?" - remains to be seen.
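A back-of-the-envelope version of the Tryon-Vilenkin point (a standard order-of-magnitude argument, not a derivation):

E_{\text{total}} \approx M c^{2} - \frac{G M^{2}}{R} \approx 0 \quad \text{when} \quad R \approx \frac{G M}{c^{2}},

and the estimated mass and radius of the observable universe satisfy this relation to within roughly an order of magnitude - which is what makes the "total energy of the universe is zero" conjecture quantitatively respectable.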
* * *
Yes, Smolin ("Time Reborn") versus Weyl / Barbour ("The world doesn't happen, it just is")
("Time Regained! by James Gleick | The New York Review of Books
"Time Reborn: From the Crisis in Physics to the Future of the Universe" by Lee Smolin.")
* * *
Rui, intuitively, yes. But perhaps see e.g. Rolf Landauer's "The Physical Nature of Information". A physicist would say that the information content of the observable universe can't exceed ~10^120 quantum bits of information, expressed in fundamental Planck units.
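For what it's worth, the usual back-of-the-envelope route to a figure of that order is the holographic bound - roughly one bit per Planck area (up to a factor of ln 2) on the cosmic horizon:

N \lesssim \frac{\pi R_{H}^{2}}{l_{P}^{2} \ln 2} \sim \left( \frac{\sim 10^{26}\,\mathrm{m}}{\sim 10^{-35}\,\mathrm{m}} \right)^{2} \sim 10^{122},

in order-of-magnitude agreement with the ~10^120 figure quoted above.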
* * *
("The physical nature of information" (1996) by Rolf Landauer.)
[on machine consciousness]
Will machines ever be conscious?
Ben Goertzel on 'Consciousness and Thinking Machines' at the Asia Consciousness Festival
The most intense forms of consciousness - e.g. agony, orgasm, panic - are evolutionarily ancient. They have little obviously in common with rational thought. Can machines be conscious? The obvious answer is yes: on this view we are ourselves conscious organic machines. I think the interesting question is rather: can nonbiological machines be conscious? Intuitively, the functionally unique valence properties of the carbon atom are too low-level to be functionally relevant. But we don't know this. Compare the view that primordial life elsewhere in the multiverse will be carbon-based. This conjecture was once dismissed as carbon chauvinism. It's now taken very seriously by astrobiologists.
For what it's worth, I doubt a classical digital computer will ever be non-trivially conscious, let alone generate unitary "bound" perceptual objects or a unitary subject of experience. If so, a classical digital computer will never be able to e.g. explore the manifold subjective properties of mind in the manner of its organic counterparts. IMO the future belongs to sentient biological robots, not their supporting cast of silicon zombies.
* * *
Eray, apologies (I try to be restrained about hotlinking my own work!) but I've long argued precisely for such Strawsonian physicalism. However, the only scientifically literate version of panpsychism doesn't, by itself, explain how either organic robots or digital computers could be anything other than zombies. For we still need to solve the binding problem - and the closely related Moravec's paradox. Irrespective of how they are functionally connected, how can 80 billion odd neurons, conceived as discrete, spatially distributed and membrane-bound classical information processors, generate unitary phenomenal objects, unitary phenomenal world-simulations, or a (fleetingly) unitary self? Why aren't we mere patterns of "mind dust"? The Explanatory Gap appears unbridgeable as posed. Our phenomenology of mind seems as inexplicable as if 1.3 billion skull-bound Chinese were to hold hands and suddenly become a unitary subject of experience. Why? How?
Finding a theory of consciousness that isn't demonstrably incoherent or false is a challenge. But consider Tegmark's well-known critique of quantum mind.
Let's assume Strawsonian physicalism is true but also that Tegmark rather than his critics is correct: thermally-induced decoherence destroys distinctively quantum mechanical effects in an environment as warm and noisy as the brain within 10^-13 seconds - rather than the much longer times sometimes claimed by Hameroff and others. What would it feel like "from the inside" to instantiate a quantum computer running at 10^13 irreducible quantum-coherent frames per second - computationally optimized by hundreds of millions of years of evolution to deliver real-time simulations of the macroscopic world?
True or false, a strong prediction of this conjecture is that classical serial digital computers will never be non-trivially conscious.
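The arithmetic behind the "10^13 frames per second" figure above is simply the reciprocal of the assumed decoherence time (a sketch, taking Tegmark's estimate at face value):

f = \frac{1}{\tau_{\mathrm{dec}}} = \frac{1}{10^{-13}\,\mathrm{s}} = 10^{13}\,\mathrm{s}^{-1},

so a single ~10^-1 s perceptual moment would span of order 10^12 successive quantum-coherent "frames".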
* * *
Eray, I certainly don't claim Levine's notorious Explanatory Gap can't be bridged by science! Rather the gap is unbridgeable given an ontology of materialism. Traditionally, materialism and physicalism have been treated as close cousins. Strawson, anticipated by Michael Lockwood (and others), has convincingly shown this needn't be the case. Nor IMO are we entitled to claim that only an organic brain could be a unitary subject of experience. For we simply don't know what may or may not be possible in a future era of mature artificial quantum computing. Instead, I was arguing in the context of the [phenomenal] Binding Problem that a classical serial digital computer can never support experiential bound objects or unitary subjects of experience. In short, even granted Strawsonian physicalism, i.e. panpsychism couched in the formal language of relativistic quantum field theory, digital computers will always be zombies.
* * *
Eray, empiricism and an empirical methodology are different. Science depends on the latter, not the former - unless, that is, you're arguing for an anti-realist instrumentalism that aims simply to "save the phenomena". Are you arguing against the completeness of post-Everett QM in favour of some kind of dynamical collapse theory? If not, then the existence of quantum superpositions of large biomolecules is entailed by the completeness of QM. Their irreducible existence is distinct from the question of whether they are (or are not) sufficiently long-lived to play any computational / functional role in living organisms.
How do we know that the population of China, or a termite colony, or the PC on your desk, or numerous other information processing systems, aren't unitary subjects of experience? In short, we don't! But if we do invoke such radical forms of emergence, then we are again faced with:
* * *
Eray, the existence of macroscopic superpositions in the brain is a prediction of our empirically best tested theory of the world, quantum mechanics. This doesn't prove such superpositions exist. Maybe quantum mechanics breaks down in the brain. Or maybe (and this would be a more common view) they are too short-lived in a warm and noisy environment such as the brain to be computationally and/or experientially relevant.
The nature of object binding, and the breakdown of the (fleeting, synchronic) unity of the self, is perhaps best illustrated by neurological syndromes in which binding partially breaks down. Consider e.g.
I'm not entirely clear why you're invoking Penrose. Recall I'm arguing in favour of the completeness of QM, not some unphysical "collapse of the wave function". IMO it's uncharitable to describe Penrose as a "crank". He is a brilliant mathematician and physicist. But there is indeed no supporting evidence, whether theoretical or empirical, for the Penrose-Hameroff Orch-OR (orchestrated objective reduction) conjecture.
The question of whether macroscopic superpositions in the brain exist is distinct from the question of whether they do any computational work - and whether or not they are relevant to consciousness. [My conjecture is that they are indispensable to phenomenal object binding, i.e. that ostensibly discretely and distributively processed edges, textures, motions, colours etc are fleetingly irreducibly bound when one apprehends a perceptual object in one's world-simulation. On this story, classical serial digital computers will never be non-trivially conscious.] Eray, sorry, I'm unclear whether you are arguing that 1) macroscopic superpositions in the CNS don't exist? (i.e. QM breaks down and must be supplemented with some kind of dynamical collapse theory) or 2) they do exist, but they are computationally and/or experientially irrelevant?
Thanks for clarifying your position Eray. But is there any evidence at all for e.g.
Hameroff argues that quantum coherence in neuronal microtubules is sustained for far longer (milliseconds) than critics like Tegmark (femtoseconds) are willing to accept. Maybe so; I haven't yet seen any convincing evidence. Instead, let's assume Tegmark is correct. Neuronal processes mediating edge, colour, motion (etc) detection can be in a unitary, irreducible quantum coherent state for no more than a hundred femtoseconds or so.
Intuitively, this kind of timescale is hopeless for solving the binding problem. For we perceive our surroundings with a time-lag of scores of milliseconds - a truly staggering feat of computation, for sure, but nothing like sub-picosecond timescales. Nerve impulses travel up the optic nerve at a sluggish 100 m/s or so.
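For concreteness, a rough consistency check with the figures above (the distance is a ballpark assumption of mine): a signal covering the ~0.1 m from retina to visual cortex at ~100 m/s takes

t = \frac{d}{v} = \frac{0.1\,\mathrm{m}}{100\,\mathrm{m/s}} = 10^{-3}\,\mathrm{s} = 1\,\mathrm{ms},

so the ~150 ms perceptual lag is dominated by neural processing rather than conduction - and both figures sit ten or more orders of magnitude above sub-picosecond decoherence times. Hence the intuition that such timescales are "hopeless".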
However, IMO the account above is underpinned by a false theory of perception. Philosopher Bertrand Russell was widely mocked for his oft-repeated claim that one never sees anything but the inside of one's own head. But in a critical sense, this is true. The difference between being awake and dreaming is not that when one is awake, the mind-independent world somehow stamps its features on the contents of one's world-simulation. Rather the most that the external world can do, via coded impulses from the optic nerve etc, is to select from a pre-existing menu of mind/brain states. Assuming Strawsonian physicalism, then, what would it be like to instantiate 10^13 irreducible quantum-coherent mental states per second? When we're awake, these states would coarsely track fitness-relevant patterns in the local environment with a delay of 150 milliseconds or so; when we're dreaming, such selection (via optic nerve impulses etc) is largely absent.
As it stands, this is mere hand-waving. An adequate theory of mind would rigorously derive the properties of our bound macroqualia from the (hypothetical) underlying microqualia. But if the story above is on the right lines, then a classical digital computer or the population of China (etc) will never be non-trivially conscious.
* * *
Eray, to describe a quantum superposition as "irreducible" isn't mumbo-jumbo; it's a tautology. This doesn't prove any such beast can exist in the brain. Maybe quantum mechanics breaks down in the CNS. Maybe we need to devise a dynamical collapse theory. But you've also acknowledged that there is no evidence that wave functions ever collapse. This makes your denial of the possibility of even fleeting macroscopic superpositions puzzling.
Sorry, which electromagnetic theory of consciousness did you want me to critique? Some versions are consistent with Strawsonian physicalism. Others are implicitly dualist.
* * *
"Irreducible" in the sense of not reducible to the behaviour of discrete classical atoms. There is no evidence QM breaks down in the brain. Hence the reason for inferring the existence of macroscopic superpositions that are rapidly destroyed owing to thermally-induced decoherence. Electromagnetic fields are ubiquitous in everything from the enteric nervous system of the gut to the brain in a dreamless sleep. We need to understand the necessary and sufficient conditions for phenomenal object binding and the experiential unity of perception; here EM theorists of consciousness differ.
* * *
Eray, right now I can simultaneously apprehend a dozen or so different figures walking at varying distances in front of my body-image. Someone with simultanagnosia or akinetopsia
cannot do this. How would you describe what we have - and they lack?
Neither object binding nor the experiential unity of perception are artefacts of folk-psychology. They are fitness-enhancing adaptations that neuroscience must explain. But how?
The existence of macroscopic superpositions is a prediction of any [realist] theory of quantum mechanics that doesn't invoke state vector collapse. To date, much of the debate has focused on decoherence timescales. Such superpositions are exceedingly long-lived if conceived in terms of natural Planck units, and exceedingly short-lived if conceived on everyday folk psychological timescales.
Even assuming Strawsonian physicalism, their experiential and/or computational relevance to organic minds remains to be shown. But this is a radically different question from the claim such macroscopic superpositions don't exist.
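To put numbers on "long-lived in Planck units" (a straightforward unit conversion, using a Tegmark-style decoherence time):

\frac{\tau_{\mathrm{dec}}}{t_{P}} \approx \frac{10^{-13}\,\mathrm{s}}{5.4 \times 10^{-44}\,\mathrm{s}} \approx 2 \times 10^{30},

i.e. some thirty orders of magnitude above the Planck time, yet roughly twelve orders of magnitude below a ~100 ms perceptual moment.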
* * *
Eray, sorry, I fear you've lost me. How can anyone be guilty of "Cartesian materialism" while simultaneously "even more vitalist / dualist than Searle"? The whole point of Strawsonian physicalism is its uncompromising monism.
* * *
But Eray, I'm not a materialist: I've been arguing against it in favour of Strawsonian physicalism. Strawsonian physicalism is a conjecture about the intrinsic nature of the physical. Nor is there any dispute about our massively parallel architecture. Rather the question is whether a purely classical parallelism is consistent with the phenomenology of experience.
* * *
Eray, at times an unworthy suspicion crosses my mind...
("People Argue Just To Win, Scholars Asset")
* * *
Eray, Strawsonian physicalism is...physicalist. There is no "element of Reality", as Einstein puts it, that is not captured by the equations of physics and their solutions. The materialist claims that the intrinsic nature of the world's fundamental fields that the quantum-theoretic formalism describes (poetically, the "fire" in the equations) is non-sentient. By contrast, advocates of electromagnetic theories of consciousness and thoroughgoing Strawsonian physicalists alike beg to differ.
By way of distinction, proponents of specifically electromagnetic theories of consciousness need to explain why and how matter fields described by Fermi–Dirac statistics are non-conscious whereas one field described by Bose-Einstein statistics is identical with primordial consciousness.
The "fire" allusion is of course a nod to Stephen Hawking. Like most materialists, Hawking acknowledges we have "no idea of what breathes fire into the equations and makes there a world for us to describe" while at the same time dismissing any kind of panpsychism or monistic idealism.
In the language of Kant, the formalism of physics does not disclose the noumenal essence of the world. Orthodox materialists may assume that the fundamental fields are nonconscious; but this is an assumption, not a discovery.
* * *
Here at least we agree. Behaviourism is a false theory of mind. (hence the joke: two behaviourists make love. One then says to the other, "That was good for you. Was it good for me?")
* * *
I'd agree that subjective experience has a physical explanation. Its innumerable textures are [I assume] exhaustively encoded by the formalism of physics. What's critical is that we don't prejudge the intrinsic nature of the "physical" that the equations describe.
* * *
Amusing as in philosophical slapstick, Thomas?! I hope not...
If one is a Strawsonian physicalist, then micro-qualia or "mind-moments" are ubiquitous. But this kind of naturalistic panpsychism is not a license for animism. Mere aggregates of discrete psychic pixels, so to speak, aren't a unitary subject of experience apprehending multiple bound objects, irrespective of their functional connectivity.
How about digital computers? Even if Strawsonian physicalism is true, and even if we could detect the noise of fleeting macroscopic superpositions internal to a CPU, we've no grounds for believing a digital computer [or any particular software program it runs] can be a subject of experience. Their fundamental physical components may [or may not] be discrete microqualia rather than the insentient silicon (etc) atoms we normally suppose. But their physical constitution is computationally incidental to the sequence of logical operations they execute. Any distinctively quantum mechanical effects are just another kind of "noise" against which we design error-detection and -correction algorithms.
So how are organic minds any different? What explains the phenomenology of human experience? Yes, we're massively parallel, but so are subsymbolic connectionist architectures (question-beggingly called "neural networks") - and their parallelism is purely classical. The story I'd tell is boringly orthodox in one sense. Our minds are formally described by the connection and activation evolution equations of a massively parallel connectionist architecture, with phenomenal object-binding a function of simultaneity: different populations of neurons (edge detectors, colour detectors, motion detectors etc) firing together to create ephemeral bound objects. But simultaneity can't, by itself, be the answer. There is no one place in the brain where distributively processed features come together into multiple bound objects in a world-simulation instantiated by a fleetingly unitary subject of experience. We haven't explained why a population of 80-billion-odd discrete neurons, classically conceived, isn't a zombie in the sense that China [1.3 billion skull-bound Chinese minds] or a termite colony or a silicon robot is a zombie.
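As a concrete illustration of what "purely classical parallelism" amounts to, here is a minimal toy sketch - an invented example, not anyone's actual model of the brain:

```python
# A toy connectionist network: "activation evolution" is just repeated,
# parallel local updates over a vector of unit activations.
import numpy as np

rng = np.random.default_rng(0)
n = 8                                # toy "neurons" (feature detectors)
W = rng.normal(0, 0.5, size=(n, n))  # connection weights
a = rng.random(n)                    # initial activations

for t in range(100):
    a = np.tanh(W @ a)               # a(t+1) = f(W a(t)), applied in parallel

print(a)                             # still just 8 separate numbers
```

However densely such units are interconnected, the state at each step remains a mere list of discrete numbers: nothing in the formalism picks out a bound perceptual object, let alone a unitary subject of experience.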
None of the above considerations goes to show that what we're calling simultaneity is actually the functional signature of 10^13 per second unitary macroscopic quantum-coherent states. Macroscopic "mind moments" must occur if (1) Strawsonian physicalism is true and (2) macroscopic superpositions are real; but couldn't they just be functionally incidental psychotic "noise"? Why suppose that Nature has been computationally optimising the selection of sequences of macroscopic "mind moments" in organic robots to track fitness-relevant patterns in the local environment for hundreds of millions of years?
[to be continued...]
* * *
In the Penrose-Hameroff model, consciousness does not cause the [alleged] collapse of the quantum wave function. Rather, consciousness is supposed to be a particular kind of self-collapse involving quantum gravity. Quantum superpositions comprising multiple coexisting possible actions or experiences are supposed to exist in some sort of pre-conscious state that becomes conscious upon reaching a particular threshold: the moment of self-collapse.
No, I'm not remotely convinced either. But then the emergence of consciousness from the "pre-conscious" is no less of a mystery within the conceptual framework of orthodox materialism.
* * *
The best contemporary treatment of the world-simulation metaphor I know is Antti Revonsuo's "Inner Presence". And not a collapsing wave function in sight.
Most of us, at least fleetingly, instantiate:
I guess one could argue that those of us who don't have simultanagnosia or motion blindness (etc) merely seem to instantiate a unitary subject. But in the realm of pure phenomenology, the distinction between appearance and reality collapses.
* * *
Eray, if someone tells me that [phenomenal] object binding and the [synchronic] unity of consciousness are illusory, then it's like being told pain is an illusion. I'm left scratching my head over what the speaker means.
OK, to switch modalities, imagine if someone tells you that the phenomenology of listening to a piece of music is an illusion. All that exists are individual sequences of discrete musical notes of the different instruments: there is no subject of experience enjoying the symphony. How would you respond?
* * *
Dustin, just a quick note about monism. Strawsonian physicalism is a conjecture about the ultimate "fire" in the equations. As such, it's a conjecture about substrate, not about the nature of any information-processing role that qualia may [or may not] play. In the case of organic robots, at least, all sorts of experiences, e.g. phenomenal pain, do appear typically to play an information-processing role. Such a role is absent in, say, a rock or in a digital computer - where the intrinsic character of the fire in the equations is of no more relevance to a program's output than whether the CPU is built of silicon or gallium arsenide.
* * *
Dustin, if I had to guess, the solutions to the master equation of [utopian] physics yield the field-theoretic values of microqualia. Summing these numerically encoded values of microqualia cancels out to zero, i.e. reality has no net information at all.
Zero information = all possible self-consistent descriptions = Everett's multiverse.
However, exploring a zero ontology takes us beyond Strawsonian physicalism.
* * *
Eray, no one here is arguing for the Penrose-Hameroff Orch-OR theory. You'll forgive me if I don't attempt a FB primer on consciousness and computational neuroscience, but I also note (no more) the strengths and weaknesses, as I see them, of connectionism (e.g. Aug. 2).
I see no reason to believe in a "Cartesian theatre" as defined by Dennett. But if one is an inferential realist about perception (cf. the world-simulation metaphor, best defended by Revonsuo), then phenomenal sunsets and skyscrapers - and one's cross-modally-matched phenomenal body-image - really are in the head.
* * *
Eray, here at least we agree. But to many philosophers, the world-simulation metaphor invites (IMO mistaken) charges of radical scepticism or Berkeleyan idealism.
* * *
Eray, I think our ignorance is too deep to describe dualism as "silly" as distinct from aesthetically unappealing. The only way I know to rescue monism is via Strawsonian physicalism. By contrast, positing a fundamental ontology of sentient and nonsentient fields is a form of interactionist dualism. How and why is one particular bosonic field supposed to instantiate the ontological novelty of consciousness in a hitherto insentient world? We are being asked to endorse a very strong sort of ontological emergence. In short, EM theories of consciousness do not solve the Hard Problem of consciousness, nor do they close Levine's Explanatory Gap; they merely relocate it.
* * *
This is precisely what I'd want to ask of the EM theorist. If the world's fundamental field(s) are endowed with a primitive subjectivity, then we have a robustly monistic theory, i.e. Strawsonian physicalism. If, on the other hand, consciousness is conjectured to reside purely in the electromagnetic field, and other fundamental fields are supposed to be nonsentient, then we'll want to know why and how the electromagnetic field acquires this ontologically unique property. Was the world completely insentient until the close of the electroweak epoch?
I'd also want to understand what an author means by "proto-experience". Does "proto-experience" mean nonsentient [but readily susceptible to modification to allow sentience]? Or does "proto-experience" mean possessing only the most minimal sentience?
* * *
Jim, yes indeed. As a physicist once remarked, most of the really interesting things in the world happened during the first second. [He probably meant the first 10^-43 seconds]
Above I've assumed a field-theoretic framework. But Strawsonian physicalism holds equally if we assume mathematically-defined superstrings or branes. Presumably, the modes of vibration of the fundamental strings (or p-branes) express the different values of microqualia. However, some physicists regard M-theory as a degenerating research program, as Lakatos would put it. Alas I'm not technically competent enough to offer an informed opinion.
* * *
Eray, the English expression you're looking for is "I must respectfully beg to differ". Paweł is in illustrious company. Kant called it "the transcendental unity of apperception". Contemporary analytic philosophers would speak of the synchronic unity of the self. By "unity" is not meant uniformity. Rather right now within my egocentric virtual world, I am simultaneously apprehending [or instantiating] a dozen or so figures walking outside my apartment while I am listening to a piece of music. How is this co-consciousness possible? Ascribing primitive consciousness to individual, membrane-bound classical neurons - and indeed to the world's fundamental fields / strings / branes if Strawsonian physicalism is true - does not, by itself, explain how phenomenal object binding, the phenomenal unity of perception, or the synchronic unity of the self is feasible. Why aren't we quasi-zombies - mere pixelated aggregates of mind-dust, as we are [assuming Strawsonian physicalism] in a dreamless sleep?
As you know, I argue for a quantum-mechanical explanation. But for now it's just an explanation-space rather than a true explanation.
* * *
Jim, apologies, it's possible we may understand the term "quasi-zombie" differently. I wasn't alluding to free will (of which I'm highly sceptical) but rather a "zombie" in the philosophers' sense.
* * *
Dustin, to be a monist is to make a metaphysical conjecture about the world. It's not about closing a gap in any epistemological sense; epistemological gaps will always be legion. For sure, the monistic idealist - whether a scientifically literate Strawsonian physicalist or a German Romantic (etc) - is venturing way beyond the available evidence. But so is anyone who aspires to a world-view more ambitious than solipsism-of-the-here-and-now. The only phenomena to which one has direct, non-inferential access are the contents of one's own consciousness [and even here we are prone to confabulation and self-deception.] To attack the Hard Problem, the Strawsonian physicalist argues that one's own consciousness discloses that the intrinsic properties of matter and energy - the "fire" in the equations - are utterly unlike what one's native materialist intuitions might suppose. Ontologically, we are not fundamentally different from the rest of the world. By contrast, the materialist (and the dualist, epiphenomenalist, etc) conjectures there is a mind-independent world of insentient material objects / insentient fundamental fields, a novel ontological category wholly beyond one's experience. Or alternatively (and perhaps more commonly) the materialist believes s/he is somehow directly presented with a world of macroscopic material objects if s/he entertains a (IMO hopelessly untenable) direct realist theory of perception.
To be a physicalist is to believe that there is no "element of Reality", as Einstein puts it, that is not captured by the equations of (tomorrow's) physics. Both the traditional physicalist/materialist and the Strawsonian physicalist believe that the behaviour of the fundamental stuff of the world is exhaustively described by the equations of physics and their solutions. But the traditional physicalist/materialist believes the fundamental fields/strings/branes are intrinsically insentient, whereas the Strawsonian physicalist believes the fundamental fields/strings/branes are intrinsically experiential.
"Trivial"? Experientially unbound. As in
For example, if one plays a game of chess, then if Strawsonian physicalism is true, then the fundamental fields/strings/branes comprising the pieces and board are experiential. But a chess piece isn't a unitary subject of experience, and the discrete intrinsic experiential properties of the ultimate constituents of the pieces are computationally incidental - of no more relevance to the gameplay than whether the pieces are wood or metal.
[on the transition to post-Darwinian life]
Is the well-being of all sentience medically feasible?
Towards The Abolition of Suffering
Reflections on the Abolitionist Project
G100 Bondurant Hall on UNC School of Medicine Campus
("New generation of rapid-acting antidepressants?")
("Drug testing could stop 'academic doping'")
Use of anti-nootropics, notably ethyl alcohol, seems de rigueur among academics. Perhaps a regime of teetotalism would unleash an intelligence explosion...
James, if the moral urgency of phasing out suffering were universally shared, its biology could be gone within this century. In practice, I'd guess the last unpleasant experience in our forward light-cone is centuries away. More troublingly, Brian Tomasik worries that the Era of Suffering in our Galaxy has scarcely begun - a totally different meta-narrative to the scenario I envisage.
An exposé of the links between abolitionist bioethics and the Third Reich:
Superhappiness, superlongevity & superintelligence?
I hope so...
[on "the mind of David Pearce"]
The end of suffering?
David Pearce Video Interview
MP4 (511Mb)
interviewer Andrés Gómez Emilsson, Stanford 2012.
• Part 1 - Why and how to get rid of suffering
• Part 2 - The rights of unborn children and deontological critiques
• Part 3 - Brave New World and No-Pleasure-No-Pain objections
• Part 4 - How can we help? And, is hedonism inauthentic or fake?
• Part 5 - Nonhuman animals shouldn't suffer either
• Part 6 - Is it Technologically Possible to Re-Engineer Ecosystems?
• Part 7 - With Great Power Comes Great Responsibility
• Part 8 - Radically Altered States of Consciousness
• Part 9 - Superintelligence
• Part 10 - Empathetic Superintelligence
• Part 11 - Metaphysics: Theory of mind
• Part 12 - Metaphysics: Time
• Part 13 - How to make biotech sound sexy?
• Part 14 - Why digital computers cannot be sentient
No one need buy into my idiosyncratic views on why classical digital computers will never be nontrivially conscious (cf. the binding problem) to sign up to phasing out involuntary suffering throughout the living world. I just answered the questions Andres put to me in San Francisco!
Until gene therapy makes psychoactive drugs (potentially) redundant, I think we need research into safe and sustainable mood-brighteners - ideally, mood brighteners with a pro-social oxytocinergic action. The pitfalls, clearly, are immense. But the War On Drugs has been a catastrophic failure - and unconstitutional to boot.
[Whatever happened to our "inalienable right" to the pursuit of happiness?]
* * *
Thanks Jason. Could posthumans have billions of emotions instead of our "core" half dozen? With genetic engineering, I suspect so. Spirituality? We can go further. Neuroscanning technologies can identify the molecular signature(s) of spiritual experience - and then use genetic engineering to amplify and overexpress its substrates. Posthumans can enjoy hyperspiritual states far richer than anything physiologically accessible today every moment of their quasi-eternal lives.
Or so runs the pitch of my imminent grant application to the Templeton Foundation
* * *
When is one ethically entitled to harm another sentient being? We all know life is messy and complicated. Some moral dilemmas have no easy solution. Sometimes, acting ethically calls for heroic self-sacrifice that few can manage. But if we consider the horrors of factory farming, there is no good argument why one shouldn't quit eating meat altogether - and plead with one's entire circle of acquaintance to do the same. "But I like the taste!" must count among the most feeble and weak-minded excuses for cruelty I can imagine.
Eliza, there is indeed a sense in which by harming others, one is also harming oneself. Although I'm a sceptic about any form of personal identity over time, there is an opposing view:
* * *
Tim, intuitively you're right. And I don't rule out that e.g. fourth-millennium nonbiological quantum computers will be sentient. But for several centuries, at least, I suspect on theoretical grounds that only biological neural networks [perhaps with neuroelectronic interfaces] will be nontrivially sentient. This is because of the classically insoluble phenomenal binding problem. The view that only macroscopic quantum coherence offers a potential solution to the phenomenal binding problem is controversial. But if it's viable, then neither classical digital computers nor classically parallel connectionist systems will ever support unitary minds - nor ever be nontrivially conscious. So much for the prospects of imminent full-spectrum superintelligence.
IMO our minds were quantum computers long before they acquired a neocortex:
* * *
Asparagus? Plant cells are encased in cellulose cell walls whose structure effectively rules out computationally useful multicellular quantum coherence (but see too: "Unusual quantum effect discovered in earliest stages of photosynthesis"). This would hold even if organisms without the capacity for rapid self-propelled motion could evolve anything analogous to an energetically expensive organ like the brain. So I think asparagus-eaters can sleep easy at night...
"Making a living off"?! Jay, if I were paid for philosophising, I could probably invest in a comb! Timescales? Well, technically at least, prospective parents could use preimplantation genetic diagnosis even today to choose benign alleles of the COMT gene (enhanced reward sensitivity) and the SCN9A gene (high pain thresholds) for their future children. The burden of suffering in the world would thereby significantly be reduced. The biggest obstacles to phasing out the biology of suffering aren't technical, or even ethical/ideological, but simply status quo bias.
"Who would question the benefit of the thesis"? Well, if and when life is animated by gradients of intelligent bliss, no one at all! Alas, in the meantime there are many obstacles to overcome.
Jay, if humans made babies asexually via clonal reproduction, then "monkeying" with our genome would indeed be a bold and risky genetic experiment. In reality, sexual reproduction means that every child is a unique genetic experiment. The consequence of such genetic experimentation is immense suffering. Natural selection does not care about the welfare of the individual.
Buddhism? I'm an admirer of Buddhist ethics. Buddhists recognise the overriding moral urgency of overcoming suffering. Alas following the Noble Eightfold Path does not recalibrate the hedonic treadmill - nor the cruel web of negative feedback mechanisms in the mind-brain that ensure humans are malaise-ridden and discontented for much of our lives. Nor can following the Noble Eightfold Path dismantle the horrors of the food chain. Compassionate stewardship of Nature (in tomorrow's wildlife parks) will entail some kind of high-tech Jainism.
Overcoming anhedonia? Yes!
* * *
El Marte. Yes, your point about empathy is critical. Phasing out the biology of suffering worldwide will entail enriching our empathetic understanding of other sentient beings. Not least, humans must overcome the profound cognitive deficits in perspective-taking capacity that underlie the horrors of factory-farming and eating meat. I suspect posthumans will regard their ancestors (i.e. us) as little better than simple-minded cannibals.
Fortunately, the options of mood-enhancement and empathy-enrichment are not mutually exclusive. One (fanciful?) possibility would involve widespread use of long-acting empathetic euphoriants, for example safe and sustainable analogues of MDMA (Ecstasy). But drugs are at most a stopgap. I think the development of full-spectrum superintelligence will entail genetic enrichment of our capacity for empathetic understanding, augmented by the "naturalised telepathy" of tomorrow's information-technology. Today, empathy typically entails sharing each other's miseries. Posthuman empathy will most likely entail sharing others' pleasures.
However, the easiest way to reduce suffering in the world right now doesn't involve empathetic superintelligence, heroic personal self-sacrifice, or hi-tech genetic engineering. Rather the first step involves giving up eating meat - and urging everyone else to do likewise. Factory farming is the worst source of severe, chronic and readily avoidable suffering in the world today.
Thanks El Marte. Actually, I hesitated before adding the link to empathogens for fear of suggesting that adopting a cruelty-free diet requires superhuman powers of perspective-taking prowess - as distinct from a willingness to tolerate (very) mild inconvenience. However, in a more speculative vein, I do think that future full-spectrum superintelligence will call for a far richer capacity for empathetic understanding than humans ever manage today
* * *
Jay, I understand where you're coming from. Yet why "embrace and permeate" suffering when we could abolish it instead? Once we understand its molecular signature(s), then we can phase it out altogether - just as we've abolished smallpox. Pitfalls? You bet. Unanticipated side-effects? Almost certainly. But if we believe in a world without suffering, then the problem is technically soluble through science. Often "apathy" is really masked depression. One might imagine that the happiest people would be least motivated to do anything - and so radically raising hedonic set-points would turn us into a civilisation of lotus-eaters. But empirically, this doesn't seem to be the case. Indeed, boosting mesolimbic dopamine function instils a feeling of urgency - a sense of things-to-be-done. I suspect posthumans will be hypermotivated compared to their indolent human ancestors.
Jay, I think we'll want to draw a distinction between a lack of desire born of apathy and a lack of desire born of contentment. Apathy and boredom are not pleasant - although generally they don't amount to full-blown suffering. Either way, mastery of our reward circuitry should shortly allow both our level of motivation (crudely, mesolimbic dopamine function) and hedonic tone (crudely, mu opioid function in our twin "hedonic hotspots") to be modulated independently.
What's the optimal mix of bliss and motivation? Well, in the future we should all be free to choose.
[on transhumanism in San Francisco]
What should be our greatest priority?
Humanity Plus Conference 2012
Talk of our glorious future can seem almost cruel if the listener thinks s/he won't be around to enjoy it. So yes, let's fund radical antiaging research. Intelligence-amplification is clearly hugely desirable. The biggest pitfall, I think, is a one-dimensional conception of what "intelligence" entails. What exactly are we trying to amplify? And what, if any, are the trade-offs?
Even those of us sceptical of the very idea of personal identity in practice oscillate between assuming it's real and disavowing its existence. Like post-Everett quantum mechanics, Buddhist / ultra-Parfitian views on personal (non)identity are hard fully to internalise even if one nominally signs up to them. As you know, I think the existence of suffering in both human and nonhuman animals is our greatest ethical challenge. But this doesn't necessarily mean that the biology of suffering is best tackled by a full-frontal assault. If you think e.g. we're on the brink of a Technological Singularity and nonbiological SuperIntelligence, then you'll have a very different agenda than if you believe getting rid of suffering is an obligation that falls squarely on biological humans...
* * *
If we think that humans - or rather our transhuman and posthuman successors - will have a responsibility for stewardship of our entire Hubble volume, then (even within a classical utilitarian framework) it makes sense not to aim for maximum bliss here on Earth asap. But I think we're pretty safe phasing out the biology of involuntary suffering, most of which by almost any lights is just futile and nasty.
* * *
From Crank Alley to scientific mainstream...
("Scientists say they're close to unlocking the secrets of immortality")
A case can certainly be made that the more exciting the message, the more "boring", i.e. sober, should be the presentation. Unlike, say, Randal Koene, who'll be speaking, I'm personally a sceptic about mind uploading, substrate-independent minds and non-trivial digital sentience. But most researchers would probably regard my reasons for scepticism as weirder than the prospect of software-based minds they purport to question.
* * *
Ruari, yes, on some estimates free-living nonhuman animal suffering does indeed exceed man-made nonhuman animal suffering, i.e. factory farming. Uncontrolled habitat destruction by humans is probably preventing more free-living animal suffering than any compassionate interventions we could make now. Before we can contemplate implementing high-tech Jainism in the rest of the living world, IMO we're going to need to persuade people to stop paying for the horrific suffering for which humans are directly responsible. Sadly, I'm not convinced we can close factory farms and the death factories until the commercialisation of in vitro meat in a decade or two.
“Our job now is to prepare the grounds for forthcoming generations to take action where we may be currently unable to act” (Oscar Horta)
Ruari, absolutely! That's why I've long gone on about such "crazy" stuff as reprogramming predators even though I know some of my colleagues are rolling their eyes. But it's going to be a long journey ahead. IMO you're doing a fantastic job helping to spread the word.
* * *
("Can a Jellyfish Unlock the Secret of Immortality?")
I suspect senior dogs will receive radical antiaging therapies almost as soon as senior humans...
("Anipryl® - Help for Senior Dogs?")
* * *
A PDF of my talk, based on:
Will humanity's successors be our descendants?
* * *
Indeed so Luke. Of course, if one has a bad toothache, the greatest priority in the world is obvious. To me it's always seemed obvious that fixing the biology of suffering is most urgent. But a case can be made for ending ageing, intelligence-amplification / a friendly Singularity - and existential risk. A complication is that one can agree that an issue is of supreme importance without knowing how one can make a difference. Thankfully, the goals I mentioned are mutually consistent - one reason we can talk of a transhumanist movement, despite our disparate priorities.
* * *
Excellent Roberto. I suspect posthumans will regard archaic humans as little better than cannibals. But civilisation is (slowly) spreading...
* * *
The plot thickens...
("Singularity University Acquires the Singularity Summit")
* * *
Could we just be unwitting tools of Satan?
("Transhumanism Agenda Is Satan’s Counterfeit Ye Shall Be As Gods")
* * *
Brandon, field studies lead me to suspect your conception of this world may run closer to:
("The Garden of Earthly Delights - Hieronymus Bosch -")
Jeanne Calment's record of 122 years 164 days will probably be broken in the early 2030s.
One of the (very) few half-decent arguments against banishing aging without redesigning human nature is that, in future, totalitarian dictatorships might last centuries rather than decades.
* * *
A test for the existence of the Devil?
("Do we live in a computer simulation? Researchers say idea can be tested") Homo sapiens?
("Human hands evolved so we could punch each other")
* * *
("Legged Squad Support System (LS3): DARPA's four-legged robot with voice recognition")
* * *
Perhaps we'll all be transhumanists soon...
("US Spy Agency Predicts a Very Transhuman Future by 2030)
* * *
Brian, the Pope warns against the "manipulation of Nature":
("Pope Benedict denounces gay marriage during his annual Christmas message")
Claiming gay marriage is a source of existential risk for mankind suggests he may be losing the plot.
* * *
("Why Making Robots Is So Darn Hard")
I predict we're heading for ultrapowerful but not strong AI.
* * *
Brian, yes, in some ways Teilhard prefigured Singularitarianism - though without the (wholly speculative) concept of recursively self-improving software-based digital minds:
Indeed so Brian [Tomasik]. If anyone can carry off a magisterial synthesis of Roman Catholicism and paradise engineering, you're the man for the job.
* * *
Or will the shock send them to an early grave...?
("Roboy, the robotic 'boy' set to help humans with everyday tasks")
* * *
But "for the animals it is an eternal Treblinka.”
(Isaac Bashevis Singer)
("Apocalypse... but not as we know it")
* * *
I would give my other half a heart attack. Of course it's good to see one's quirks of character in perspective. If you think you've got problems... ("The real-life sleeping beauty, 17, who has illness which makes her belt out songs while snoozing for 12 DAYS at a time")
* * *
"The degree and kind of a man's sexuality reaches up into the topmost summit of his spirit"
("Alan Turing in three words")
* * *
In future perhaps retractable (cross-species?) thalamic bridges could help repair our profound ignorance of other first-person perspectives:
("Could conjoined twins share a mind?")
* * *
Greg, yes, fair point. The Transhumanist Declaration (1998; 2009) is egalitarian: it expresses our commitment to the well-being of all sentience.
Alas we don't always live up to this splendid aspiration; but it's an admirably impartial statement of our values.
I could be complacent and say that the cost of any information-based technology tends to zero - and this will include the information-based technology needed to deliver superhappiness, superlongevity and superintelligence. But it's hard to deny there will be a lag - even if lag-times are shrinking.
Also, we shouldn't neglect the role of competitive displays of altruism:
("Billionaires Club: Buffett and Gates Want Them to Give More")
Perhaps the current of transhumanism most focused on social justice and democratic accountability is the Institute for Ethics and Emerging Technologies (IEET):
* * *
Scope for enrichment and overexpression...
("Found: Altruism Brain Cells")
[on transhumanism in Sweden]
Time to rediscover my Viking roots - Valkyries welcome.
Humanity+ Lund Event: Lecture by David Pearce
Christian, even if one takes seriously, as I do, the idea that the pain-pleasure axis discloses the world's inbuilt metric of (dis)value, the ontological status of (dis)value is hard to understand.
("Ethics: Inventing Right and Wrong" by J. L. Mackie")
* * *
A venerable tradition, for sure:
I would like a rigorous derivation of God's utility function, so to speak.
I suspect Reality is ultimately explained by some logico-mathematical principle - perhaps an informationless Zero Ontology - rather than a divine being. Even so, we may still ask what a benevolent deity might want us to do had He existed.
"God created the integers," said mathematician Leopold Kronecker, "All the rest is the work of Man." But I fear God might have only limited discretion in such matters.
* * *
But should we try a new menu?
("Concern over 'souped up' human race")
* * *
The formula needs tweaking....
("Relax, Girl: Boyfriend's 'Love Hormone' Wards Off Your Rivals")
* * *
Can we design organic robots with a capacity for nociception without pain?
("Ashlyn Blocker, the Girl Who Feels No Pain ")
* * *
Do you really love Big Brother?
("Mind-reading scan locates site of meaning in the brain")
* * *
Alternatively, do biological robots have much hope of a future?
("Noam Chomsky on Where Artificial Intelligence Went Wrong")
* * *
Alas our ignorance of the brain is hard to overstate IMO. When will we understand why we're not zombies? ("Head in the Cloud")
* * *
Materialist neuroscience offers powerful reasons to believe conscious mind is impossible:
("Ray Kurzweil's Dubious New Theory of Mind")
* * *
A dramatic reduction in existential risk is the strongest argument I know for this kind of venture:
("SpaceX Billionaire Elon Musk Wants A Martian Colony Of 80,000 People")
* * *
(Bertrand Russell)
("Claude S. Fischer: "Happiness Policy")
Alas 90 minutes of pure magic can make the rest of one's life pall in comparison:
("MDMA keeps severe stress at bay")
* * *
I'm sceptical the legal status of MDMA will change any time soon. Our drug laws can trip up even the most innocent...
MDMA is indeed an acute remedy for hayfever, although not without side-effects.
("BBC - Mark Easton's UK: Ecstasy risks")
[on the future of pain]
In the post-genomic era, no law of nature says biological robots must suffer pain.
Should We Eliminate the Human Capacity to Feel Pain?
"Reason may cure illusions, but not suffering."
(Alfred de Musset)
Can science abolish pain?
The first comment I read said "Pain is a great thing". Some ideologically unsound thoughts sprang to mind.
See too:
Perhaps the futuristic (i.e. phasing out pain altogether) mingles too closely with the immediate and practical, e.g. choosing benign "low pain" alleles of the SCN9A gene for our future offspring. But I just answered the questions that were fired at me...
Claudio, there are indeed ways to extinguish desire, e.g. opioid drugs and (I am told) Buddhist meditation. But technically at least one can reduce or abolish pain and amplify desire. Enhanced mesolimbic dopamine function, for example, is associated both with heightened motivation and reduced pain-sensitivity.
Stirling, I certainly agree on the need for caution. Choosing benign "low pain" alleles of SCN9A, for example, is much more prudent than choosing nonsense alleles even if in future we have smart prostheses to protect us.
Ben, I agree. Phasing out "mental" pain is no less important than phasing out physical pain - which is really just a subclass of the former. Once again, we need to consider the signalling role of bad feelings. Some emotions e.g. feelings of jealousy, could well be consigned to the dustbin of history. They were genetically adaptive on the African savannah. I can't see the need to preserve even their functional analogues.
But what about, say, grief?
Well, if I were to die or suffer misfortune, then I might [selfishly?] want this fact to diminish the well-being of people I care about. But am I entitled to go further and want anyone to suffer on my account?
No - in my opinion.
Mahal, indeed. I think enriching our capacity for empathetic understanding is vital to the growth of full-spectrum superintelligence. If we opt to phase out the biology of suffering, then in future we can empathise with each other's joys rather than (as so often today) each other's sorrows.
Chris, I'm actually quite sceptical our more civilised successors will "expound on the greatness of their achievement" in phasing out [involuntary] suffering. Rather they'll simply take it for granted. Compare, say, pain-free surgery. For a short while in the mid-19th century its introduction was hailed as a medical miracle. Now we take it for granted.
Chris, John, do you believe some functions are computationally intractable without "raw feels"? This may be the case. But you will have achieved a notable success if you can show this is so. For now at least, my working assumption is that the functional role currently performed by our nastier core emotions can be replicated either in silico or by hedonic recalibration. Would you disagree?
* * *
What it's like to be a cyborg?
("I listen to color")
* * *
I'm sceptical the world contains any such objects as brains - at least as understood by the materialist. Belief in cheesy wet lumps of congealed porridge that generate consciousness is the product of naive realist story of perception. I do think the world is populated by entities functionally isomorphic to what we call brains and nerve cells; but that's different.
In the interview, I ignored philosophical issues about the metaphysics of mind, the Hard Problem of Consciousness, and Levine's Explanatory Gap. This is because we don't want anyone to feel that commitment to phasing out suffering involves signing up to anyone's idiosyncratic metaphysics [in my case, basically a combination of Strawsonian physicalism and ultra-rapid macroscopic quantum coherent states optimised by hundreds of millions of years of evolution to track the fitness-relevant patterns in the mind-independent world.]
Jamie, in essence, yes. I think the term "observation" is systematically misleading. It suggests we perceive our surroundings, whereas all these surroundings can do is select from states of one's own mind. But I think tomorrow's mathematical physics exhaustively describes the structure of the world. The solutions to the master equation of quantum mechanics yield the values of fields of microqualia: the "fire" in the equations. However, such Strawsonian physicalism doesn't, by itself, explain why we're not quasi-zombies, mere structured aggregates of "mind dust". We still need to solve the binding problem and explain the (fleeting, synchronic) unity of consciousness. I predict experimental apparatus sensitive enough to detect quantum coherence in macroscopic objects on sub-femtosecond(?) timescales would detect, not merely "noise", but richly structured quantum coherent states - states isomorphic to the macroqualia making up the egocentric virtual worlds of our daily experience.
This is of course conjecture. But if quantum mechanics is complete, then the existence of such macroscopic quantum coherent states in the CNS is not in question, merely whether they have been recruited to do any computationally useful work. Max Tegmark, for instance, would say no. I disagree. Our minds have been quantum computers for the past half-billion years.
* * *
Alas the key to the plot is still missing:
("Special issue: What is reality?" - New Scientist)
* * *
Does your brain have a mind of its own?
("You're far less in control of your brain than you think, study find")
* * *
Emphasis on parental choice and responsibility seems prudent. Screening for ghastly conditions like paroxysmal extreme pain disorder (PEPD) could certainly justify testing which SCN9A allele you pass on to your future child. But how low does someone's pain threshold have to be before it's categorised as pathological? Like depression, pain-sensitivity is dimensional rather than categorical. I suspect by civilised posthuman criteria almost all of us have multiple genetic disorders.
* * *
Should we prefer psychosis to depression?
("Feeling Down? Spirituality Can Boost Your Mood")
* * *
Compared to the horrors of severe pain, pinpricks etc are of negligible significance. But if one is a strict classical utilitarian, then pinpricks are still opportunities forgone for bodily bliss - and hence should ultimately be replaced.
* * *
Eugenicists wanting to breed super-Einsteins face a tough challenge...
("Study Finds A New Look at Genetic Factors in Intelligence Needed")
Yes, the only countries without a height differential between social classes are Scandinavian. The apparent plateau to the Flynn effect in northern Europe would indeed seem to confirm the nutritional hypothesis.
Thanks Mike. The enhanced cognitive ability conferred by an extra copy of the NR2B gene also increases susceptibility to persistent pain. So at the very least, I'd also want to make sure I had a benign allele of pain-threshold modulating SCN9A. More generally, I suspect it's no coincidence that the ethnic group whose members score almost a standard deviation above the global IQ mean also records the highest incidence of Aspergers (cf. "Natural History of Ashkenazi Intelligence"). Conversely, I suspect it's no coincidence that the ethnic group whose members score almost a standard deviation below the global mean also records the lowest incidence of Aspergers. So I worry that a focus on boosting intelligence in the narrow "autistic" sense measured by mind-blind IQ tests would have numerous adverse side effects as well as benefits. My own focus - phasing out, or recalibrating, the worst of physical pain and our nastier core emotions - should be technically easier than radical intelligence enhancement; but reducing the burden of suffering in the world will presumably have all sorts of unanticipated ramifications too.
I wonder if any experiment has ever been conducted into whether any sort of relationship exists between pain-sensitivity and IQ score. [High AQ men predominate at the highest end of the IQ scale. Testosterone is a potent painkiller. Anecdotally, at least, many Aspergers have unusually high pain thresholds (cf. the "extreme male brain" theory of autism spectrum disorder).]
* * *
Alas the Noble Eightfold Path is not enough:
The Technological Abolition of Pain
by Ben Goertzel
(Henry David Thoreau)
Kyle, it's hard to argue against pure incredulity. Yet from a technical perspective, at least, it's feasible to phase out pain, suffering, and all experience below hedonic zero. Indeed from an engineering perspective, we could design lives animated entirely by gradients of intelligent bliss orders of magnitude richer than anything physiologically accessible today.
Bears? Well, few of us dwell amongst them. But I know a lot of people enjoy watching wildlife documentaries. And there is no reason why genetically and behaviourally tweaked bears can't roam tomorrow's wildlife parks. But is a primitive world where sentient beings are disembowelled and eaten alive really preferable to its post-Darwinian successors?
Gabriel, I'd be mildly surprised if anything remotely as advanced as an amoeba exists elsewhere within our Hubble volume:
But whether our worry is visiting bug-eyed monsters from Betelgeuse, or their male human primate counterparts closer to home, there is one big advantage to recalibrating the hedonic treadmill rather than inducing uniform bliss. Recalibrating your hedonic set-point can leave motivation, preference architecture, and informational sensitivity to positive and negative stimuli intact. Indeed the more one loves life, the keener one is likely to be to preserve it. Life can still be exhilarating without pain, fear and suffering. Indeed if you crave excitement, then everyday future life can be far more exhilarating than the upper bounds of excitement physically feasible today.
Let's grant that some of our greatest musical and aesthetic experiences on a scale of 1 to 10 have been born out of deep suffering. If you could taste musical and aesthetic bliss on a scale of 90 to 100, would you want to revert to the mediocrity of the past?
Gabriel, sadly we know that perpetual, sustained misery is feasible. For evolutionary reasons, lifelong bliss is rarer. But have we any evidence that its molecular machinery is harder for evolution to engineer? Or that in posthuman paradise, something valuable will be lacking, namely pain, misery and malaise?
Nociception should be civilised, not abolished.
("Congenital analgesia: The agony of feeling no pain")
At times I pine for something a little more lively...
Gabriel, one of the many advantages of recalibrating the hedonic treadmill is how an elevated hedonic set-point can promote active citizenship rather than resigned passivity.
Perhaps see
("Subordination and Defeat : An Evolutionary Approach to Mood Disorders and Their Therapy")
Gabriel, alas we have a pretty good idea already how to depress a human or nonhuman animal's typical hedonic set-point, namely to subject him or her to chronic uncontrolled stress. (Controllable stress is different). Raising someone's hedonic set-point above his or her genetically constrained ceiling is more of a challenge. But already, if we wanted to, we could use preimplantation genetic diagnosis to select "happy genes" for our future children. And shortly we'll be able to edit our genetic and epigenetic source code and recalibrate our own reward circuitry too.
Alas evidence of any positive correlation between feelings of guilt and depravity is quite weak...
("Brain scans prove Freud right: Guilt plays key role in depression")
Ignorance is indeed a major source of suffering Jean. It's also a major source of happiness. Although I'm not convinced that churning out logical inferences has more than a passing role in the future of sentience, I agree with you that becoming both smarter and wiser is essential if we're to navigate this critical century in the history of life.
Scientific reason will lead us to superhappiness? Yes, I think so. For a counter-argument, perhaps see
General purpose intelligence: arguing the Orthogonality thesis
Depressive realism is toxic:
("How to live beyond 100")
[on posthuman superintelligence]
What is your conception of greater-than-human intelligence?
An Organic Singularity?
"Could the Organic Singularity Occur Prior to Kurzweil's Technological Singularity?"
A wide diversity of opinions is represented in the forthcoming Springer volume:
My money is still on organic superintelligence. But the dismal track record of futurology is sobering.
Can one possess, say, greater-than-human visual intelligence without the capacity to experience phenomenal colour - or any visual experience at all? Can one possess a greater-than-human capacity for introspective self-understanding without being a unitary subject of experience? I worry the whole field of AI is shot through with fallacies of equivocation.
You can functionally interconnect a bunch of classical objects any way at all; yet they can never be anything but the sum of their parts - and behave accordingly. Neurons are normally conceived as classical objects, at least for the purposes of computation. But if we're not in a coma or a dreamless sleep, organic minds are not mere speckles of classical "mind-dust". Rather we run real-time, cross-modally matched phenomenal world-simulations that are more powerful and faster than Pentagon supercomputers. And soon we're going to edit our own genetic source code and bootstrap our way to full-spectrum superintelligence. On one story, at any rate.
Presumably amplified visual intelligence can take us beyond
("An unknown number of women may perceive millions of colors invisible to the rest of us.")
At the most basic level, I wonder just how many phenomenal colours exist?
How a fundamentally quantum mechanical universe gives rise to quasi-classical macroscopic "worlds" is a very deep question. (I'll be very interested to hear what you think of David Wallace's "The Emergent Multiverse".) Few writers combine an equal level of scientific and philosophical sophistication. Some of the text is available online:
Even if one believes (as I do) that post-Cambrian organic minds have been quantum computers for hundreds of millions of years, this status doesn't mean they are computationally adapted to model the distinctively quantum mechanical features of the mind-independent world. On the contrary: our world-simulations are quasi-classical in content (just not in mechanism). The late evolutionary novelty of serial linguistic thought may be conceived as a quasi-classical virtual machine that needs shielding from all manner of "noise".
Dustin, you say: "Quantum mechanics does not specify, from the outset, the "rules" (going from lower to higher levels of organization) of molecular bonding, biochemistry, physiology, psychology, or sociology." Is this really the case? Couldn't a (notional!) God-like intellect just "read off" these effective levels of organisation from the universal Schrödinger equation (or its relativistic generalisation) and its solutions, just as such a mega-intellect could instantly divine, say, the properties of the natural numbers given Peano's Axioms?
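For concreteness, the universal Schrödinger equation gestured at here is just the standard textbook time-dependent equation applied to the state vector of everything - nothing specific to either of our positions:

```latex
% Time-dependent Schrodinger equation for the universal state vector.
i\hbar\,\frac{\partial}{\partial t}\,\lvert\Psi(t)\rangle \;=\; \hat{H}\,\lvert\Psi(t)\rangle
```

The question at issue is whether chemistry, physiology and the rest are implicitly fixed by the Hamiltonian and the universal state in the way the properties of the natural numbers are fixed by Peano's Axioms.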
Dustin, why do you describe other quasi-classical branches as "metaphysical"? Yes, there is a sense in which everything beyond solipsism-of-the-here-and-now is metaphysical insofar as it transcends the available empirical (experiential) evidence. But interference effects between quasi-classical Everett branches that have decohered ("split") never wholly disappear: they can be precisely quantified with decoherence functionals, and detected whenever our measuring apparatus is sufficiently sensitive. So yes, I'm a metaphysical realist about everything from dinosaurs to DNA to classically inequivalent Everett branches. By contrast, anti-realism about our best theory of the world turns the success of science into a miracle.
There is no such thing as quantum chaos in the sense of hypersensitivity to initial conditions:
And whether our world is formally described by a finite- or infinite-dimensional Hilbert space is an open question. As you know, I'm a finitist in maths and physics.
Dustin, perhaps I should have chosen a less controversial area than the foundations of mathematics. Would you agree that in specifying the rules of chess, one has implicitly specified all possible 10^120-odd chess games? The case of physics is somewhat different because the master equation of the presumed TOE is elusive. But let us assume we have a suitable candidate. Is there any "element of reality", as Einstein put it, not captured in the formalism? World War Two, inflation and the miniskirt are not explicitly represented, or at least not to mortal eyes. Nor, more alarmingly, is anything resembling a quasi-classical macroscopic world: just an immense quantum superposition. Quasi-classical worlds are an emergent feature. Likewise, in the case of life-supporting Everett branches, their different levels of description: chemistry, biology, ecosystems, etc. But this "emergence" is philosophically benign. Reality has only one ontological level as captured by the formalism of the TOE. There could not really exist a superbeing outside the multiverse who could "read off" its properties from the solutions to this master quantum mechanical equation. Nonetheless there are no "hidden variables" to invoke. Or are you arguing for some kind of dynamical collapse theory? Or antirealism?
I look forward to reading your paper Jonathan. Contra Penrose, Hameroff, and Kauffman, however, I don't think quantum mechanics can explain how a nonsentient world could give rise to consciousness. Rather if we assume a pan-experientialist Strawsonian physicalism, then quantum mechanics can explain classically impossible properties of organic minds, not least phenomenal object binding and the (fleeting) synchronic unity of the self.
* * *
Gastric intelligence?
("The second brain in our stomachs")
* * *
More zombie boosterism:
("Humans' Not-so Singular Status")
* * *
"Are Aerobics Trophic for Cognition in Late Life?
One intuitively feels that time spent huffing and puffing could more fruitfully be spent elsewhere. On the other hand, the cognitive benefits of regular aerobic exercise, not least hippocampal neurogenesis, suggest the investment of time and energy is worthwhile.
Some of us dice with death on a daily basis:
("Less time sitting 'extends life'")
I guess there are couch potatoes and active fidgeters:
("Fidgeting Helps Separate the Lean From the Obese, Study Finds")
Think hard and stay slim? Not exactly...
("Does Thinking Really Hard Burn More Calories?")
* * *
Human conceptions of "intelligence" suffer from a host of concealed value judgements about what is cognitively important and what is cognitively trivial. "IQ tests" are themselves designed by people (overwhelmingly male) with high AQ scores who devalue social cognition. Traditional tests reveal at least as much about the minds of those who devise them (i.e. high AQ male hyper-systematisers uninterested in social cognition) as their testees. Full-spectrum superintelligence will be incomparably richer.
However, the design of a hyper-empathetic, cruelty-free world may depend on a hypersystematising cognitive style that is alien to most naturally empathetic and compassionate people. Utilitarians, for example, are almost always men.
I suspect there is a strong positive correlation between extremely high IQ and AQ score, and an even stronger positive correlation between propensity to design mind-blind IQ tests and AQ score; but I don't know if this conjecture has ever been put to the test.
("IQ tests: women score higher than men: Women have scored higher than men in intelligence testing for the first time since records began.")
(Dr Paul Irwing: 'There are twice as many men as women with an IQ of 120-plus')
So long as "IQ tests" autistically exclude social cognition, this finding will persist IMO.
* * *
I'm torn Jose. I'd love us all to be more "naturally" empathetic. But the technology systematically to eradicate suffering, aging and disease is mostly going to be developed by high AQ folk - and likewise the willingness systematically to use it.
None of us can really empathise with more than one other sentient being at a time. Here is an extreme example of a hyper-empathising cognitive style at work. I have a female friend who dotes on her cat - and on any mauled mouse she finds in its wake. [In fact, her only real criticism of me is my inability to appreciate her cat's inner Buddha nature.] But at no time does she recognise that systematic compassion calls for a radically different approach. Thus almost all utilitarians are male - utilitarianism being the ultimate systematisers' ethic. High AQ folk tend to be hyper-systematisers: the originator of the felicific calculus, Bentham, was himself almost certainly an Asperger.
Greater-than-human intelligence...?
("Think your children are bird-brains? You're right - our feathered friends outperform seven-year-olds in logic tests")
* * *
Less Wrong or Not Even Wrong?
("Faith, Hope, and Singularity: Entering the Matrix with New York’s Futurist Set
It's the end of the world as we know it, and they feel fine.")
* * *
Christian, some might say you're a harmonious fusion of yin and yang.
(Actually, some people really do find it comparatively easy to switch cognitive style as appropriate. But what percentage of the world's greatest mathematicians, for example, have sub-Aspergerish AQ scores?)
"The Smartest Man in the World?"
What would it mean to say Chris Langan - or indeed Ed Witten - is smarter than Alexander Shulgin??
Kadu, versatility is indeed critical to general intelligence. Perhaps the greatest versatility of all, however, is the capacity of organic robots to explore both the formal and the subjective properties of mind - the "program-resistant" qualia of which digital computers are invincibly ignorant.
The bandwagon keeps rolling...
("Incognito Supercomputers and the Singularity")
("The more gray matter you have, the more altruistic you are")
But not yet as sentient as an earthworm...
("I, Robot)
Can eternal youth be genetically preprogrammed?
("DNA race to unlock ageing secrets")
* * *
Could each of our neurons soon be net-enabled?
("The human body could soon be connected to the web says 'father of the internet' Vint Cerf")
* * *
For lovers of invective... Evgeny Morozov: The Naked And The TED | The New Republic
('Hybrid Reality: Thriving in the Emerging Human-Technology Civilization' By Parag Khanna and Ayesha Khanna)
* * *
Now we just need user-friendly editing tools...
("Identically Different: Why You Can Change Your Genes by Tim Spector – review")
Sentient beings should not be created via a genetic crapshoot.
("The Hastings Center - Prenatal Whole Genome Sequencing: Just Because We Can, Should We?")
* * *
Luke Muehlhauser, CEO of the Singularity Institute [now MIRI], is currently doing an AMA on Reddit:
* * *
Moral bioenhancement? Sadly we may first need to agree on ethics before wiring in virtuous predispositions:
("Genetically engineering 'ethical' babies is a moral obligation, says Oxford professor")
* * *
A recipe for digital zombies...
Cyborgs? That might be stretching it a bit...
("Grinders: the cult of the man machine")
Would a benevolent AI turn humans into utilitronium?
("Will Artificial Intelligence Turn Evil & Against Humans?")
The mind/brain is not a digital computer... ("Self-awareness in humans is more complex, diffuse than previously thought")
Ultimate super-intelligence or an invincibly ignorant zombie?
Full-spectrum superintelligence entails mastering both the formal and the subjective properties of mind, i.e. Turing and Shulgin.
There are strong theoretical grounds for doubting a classical digital computer will ever be non-trivially conscious, let alone support a unitary phenomenal self who could do the research.
* * *
High tech Jainism?
("How Do We Care For Future People?? Buddhist and Jain Ideas for Reproductive Ethics")
An authoritative history of transhumanism has yet to be written...
("Better Than Human. The Transhumanist Transition to a Technological Future")
Farewell to organic professors?
("Robot Professors Come With Singularity University’s Massive Upgrade")
("Merging the biological, electronic")
* * *
Superintelligence imagined by someone with, say, autistic spectrum disorder differs from superintelligence imagined by someone with, say, mirror-touch synaesthesia. Unlike high AQ systematisers, folk with low AQ would be most unlikely to design an IQ test. But if they did so, then the subtypes of social cognitive ability they tested for would yield a picture of high intelligence far removed from the reigning orthodoxy in the IQ testing industry today. Of course this outcome would be biased, just as today's autistic tests are biased. Subjective value-judgements of (un)importance are inescapable.
Each of us here could design "IQ" tests. They would each have strengths and weaknesses. Some would deliver outcomes more-or-less congruent with existing tests; others would be wildly different. Some would have a measure of ecological validity; others would be wholly artificial. Who, if anyone, would be "correct"? I'm not sure this question has a factual answer.
Andres, I think I can say without fear of contradiction that you have a highly unusual mind. I agree with a lot of your points. However...
Let's say we want to design a less simple-minded IQ test. We want the revised test to be culture-, race-, and species-neutral, and to have maximum ecological validity. For a start, I'd give high weight to cognitive prowess in what most agents regard as the immensely important and cognitively demanding challenge of finding reproductive opportunities / prospective mates ("sexual intelligence"). I'd give heavy weight to "mind-reading" prowess, i.e. the perspective-taking capacity that promotes cooperative problem-solving and helped drive the evolution of distinctively human intelligence. None of this is scored by existing tests. Including sexual and social intelligence would yield outcomes quite radically different from the standard, "mind-blind" Cattell–Horn–Carroll framework that dominates the IQ testing industry today.
Now a critic might respond that such a revision discriminates against Aspergers and high AQ "geeks". But for that matter, the revised test discriminates against celibate philosophers too (though I'd never claim to be more than an idiot savant in any case!) The point is that there is no objective fact of the matter about what is "true" intelligence. All "IQ tests" involve a hotchpotch of disguised value-judgements about what abilities are - and which aren't - important.
* * *
Futurology by extrapolation has a poor track record:
("What Is the Future of Computers")
The only "autonomous lethal robots" that scare me are human:
("The Future of Moral Robots")
Did he mention he was the Son of God?
("Jesus's wife? Scholar announces existence of a new early Christian gospel from Egypt")
Genius or idiot savant?
("One Per Cent: Watson, the supercomputer genius, heads for the cloud")
The prospect of biological robots rewriting their own source code and bootstrapping their way to superintelligence may be nearer than we suppose...
("Custom gene editing rewrites zebrafish DNA")
("Mimicry beats consciousness in gaming's Turing test")
("The End of the Beginning: Life, Society and Economy on the Brink of the Singularity")
Ben Goertzel responds to David Deutsch:
("The real reasons we don’t have AGI yet")
("The Consequences of Machine Intelligence")
Alexander Kruel on why we should not lose sleep over Nonfriendly AI:
("Why I Am Skeptical About Risks From AI")
("Are Humans Getting Smarter or Dumber?")
("Stephen Hsu on Cognitive Genomics")
What is genius?
* * *
To say "I don't know" sounds lame. Reifying one's ignorance and calling it "The Singularity" sounds impressive. And what could be more exactly than a "singularity? As you know, I think posthumans minds will owe at least as much to Shulgin as Turing.
Full-spectrum superintelligence entails mastery of both the formal and subjective properties of mind; but I can't point to any impressive-looking charts to plot our progress. Alas our (non)understanding of consciousness isn't even pre-Copernican: it's pre-Socratic.
[on our heavenly or hellish future]
Utopia? Dystopia? Or Muddling Through?
Are you optimistic about the future?
Is Humanity Accelerating Towards… Apocalypse? or Utopia?
(Karl Popper)
Let's hope Sir Karl is mistaken.
Somewhat implausibly, Karl Popper can also lay claim to being the father of negative utilitarianism - a position which invited this rebuttal from RN Smart:
(Negative utilitarianism : R.N. Smart's reply to Popper)
The Epigenetics Revolution? Lars, yes indeed, though I think e.g. Nessa Carey rather overdoes the hype:
"Epigenetics Revolution: How Modern Biology Is Rewriting Our Understanding of Genetics, Disease and Inheritance"
* * *
Gabriel, you are surely correct: it's woefully simplistic simply to blame "maleness". Yet the fact remains. All nuclear weapons systems have been conceived, designed and used by men. Should we ascribe this to coincidence?
Gabriel, again I agree with you: it's simplistic to say men cause wars, just as it would be simplistic to say women cause wars because they are predisposed to mate with dominant, competitive alpha males. By analogy, it's simplistic to blame car accidents on drunk drivers. Most intoxicated drivers do not have car accidents, and most car accidents are not caused by intoxicated drivers. But high blood ethyl alcohol content is a risk factor in traffic accidents; likewise high testosterone is a risk factor when measuring propensity to aggressive war.
Sebastian, yes, I enjoyed:
("The 10,000 Year Explosion: How Civilization Accelerated Human Evolution")
Clearly, genes and culture co-evolved. Twin studies are one way to try and disentangle the strength of genetic loading for different traits. But typically such studies have all sorts of methodological problems.
Giorgio, scholarly opinion is divided. See for example the "warrior gene" controversy:
'Warrior Gene' Predicts Aggressive Behavior After Provocation
One can selectively and reversibly control one's level of MAO-A inhibition by taking e.g. moclobemide. (cf.
I've done so myself; and I can't claim it turned me into a Viking berserker, despite my Nordic origins.
* * *
("Video: Science 'girl thing' video branded offensive")
* * *
"I figure lots of predictions is best. People will forget the ones I get wrong and marvel over the rest." (Alan Cox)
Perhaps negative utilitarians shouldn't discuss nuclear weaponry too freely lest people draw the wrong conclusions. However, I'm inclined to agree with you Jonatas. The size and nature of the thermonuclear devices necessary to wipe out biological humanity in its entirety on planet Earth probably rule out their being built. But there are an awful lot of unknowns here.
("Inside the Apocalyptic Soviet Doomsday Machine")
Although one can imagine individuals seeking to destroy the world, I find it hard to envisage state actors systematically building salted thermonuclear weapons with the aim of eradicating all intelligent life. [sterilising the entire planet would be an order of magnitude (?) harder:]
* * *
We are certainly acutely aware of some of our logical stumbles. But I'd argue that we are most vividly aware of an evolutionarily ancient process that works extraordinarily well - and is completely beyond any digital computer, which is "not even stupid". What one naively calls "perceiving one's surroundings" actually entails generating "bound" and cross-modally matched experiential objects in a unitary world-simulation of a (fleetingly) unitary self - and in almost real time to boot.
At times I feel frustrated: I want to join in the excitement that's seized some sections of the futurist community over the alleged imminence of nonbiological superintelligence. Yet none of the questions that most interest me seem amenable to investigation by formal methods or digital computers. If one wants to understand and explore the manifold varieties of consciousness, then the only route I know involves the empirical methodology pioneered by Alexander Shulgin. I know some AI folk say they aren't interested in consciousness. But given it's the only phenomenon in the world to which one (sometimes) has non-inferential access, and the only reason anything matters at all, I find such incuriosity puzzling.
I guess there is quite a conceptual gulf between those who view consciousness as largely irrelevant to what's coming and those who believe it's fundamental to the future of life, mind and the universe. I wonder if the gulf can be bridged - or whether we're doomed to talk past each other?
A sentient being can understand the properties of insentient systems. But in what sense can a super-optimising system understand consciousness, or what it is to be a unitary subject of experience, or our myriad varieties of qualia? (One hesitates to use this philosophers' term of art because people think you are hypothesising some occult theoretical entity, rather than alluding to the brute phenomenology of experience). Is a classical digital computer really any more than the sum of its parts? (cf. ) IMO natural biological (and in future artificial) quantum computers have at least a fleeting ontological integrity. But ascribing unitary existence or intelligence to mere aggregates simply in virtue of the patterns they exhibit is a form of anthropomorphic projection on our part. This is what I meant by saying digital computers "aren't even stupid". (I can almost physically feel Tim wincing at this point!)
Take an atom here, an atom on Jupiter, and an atom on Alpha Centauri, and call this composite "X". Does X really exist? As defined, yes. But it's an arbitrary abstract construction. Presumably we want instead to "carve Nature at the joints". (Actually, there is indeed a sense in which the entire multiverse is a single entangled object, but this topic would take us too far afield here). I'm afraid I simply can't make sense of the idea that a nonsentient system could understand, say, phenomenal redness:
or the rich and diverse phenomenology of our individual virtual worlds. To understand a phenomenon one must, at the very minimum, know what one is trying to explain, and it's precisely this explanandum of which an insentient digital computer is invincibly ignorant.
One couldn't sensibly speak of a system being intelligent, let alone superintelligent, if it were constitutionally incapable of knowing, investigating or discovering fundamental features of the natural world, e.g. the second law of thermodynamics. So could a constitutionally insentient system understand, in any sense at all, the nature of qualia? "Raw feels", by their very nature, are concrete not abstract. So unfortunately I don't think the analogy with our conception of echolocatory experience works. For sure, it's a tempting analogy, in the same way it's tempting to suppose a nonsentient system could understand the nature of visual experience in virtue of the way differential physical sensitivity to electromagnetic spectral reflectances has been recruited via natural selection to play a functional role in sighted organisms. But the nature of colour qualia has nothing intrinsically to do with this contingent functional role. Likewise, echolocatory qualia have nothing intrinsically to do with sonar. As (for example) dreaming and microelectrode stimulation (etc) demonstrate, visual experiences can be elicited without their playing any kind of functional role in the informational economy of mind. For reasons we simply don't understand, the subjective textures of experience are an intrinsic property of some patterns of matter and energy. Full-spectrum (super)intelligence, at least as I define it, involves mastery of both the formal and the subjective properties of the world.
... I make a point of referring to humans and other biological animals as "organic robots" to throw the lack of specialness claimed into sharp relief. Computational universality? Only in a narrow technical sense. As Shulgin's methodology / algorithmic cookbook attests, humans can systematically set out to explore different state-spaces of qualia that are impenetrable to a digital computer: qualia are "program-resistant". Also, recall that advocates of quantum mind don't (or at least needn't) dispute that artificial nonbiological quantum computers may one day instantiate unitary conscious minds - no less than do (what they claim are) naturally evolved quantum biological minds. No privilege is claimed; we just express a different computational root-metaphor of mind, reflective of tomorrow's dominant technology rather than today's. At this point, critics will generally cite Tegmark (cf. ), i.e. we have no evidence that the mind is a quantum computer. But unless quantum mechanics is false, the brain does exist in a succession of irreducible macroscopic quantum superpositions. All the critic is saying is that s/he thinks such macroscopic quantum coherence is too short-lived to be computationally relevant to the coarse-grained functionalism s/he assumes. IMO the phenomenology of bound phenomenal objects and the unity of perception suggests otherwise. Whatever the cost, we need, as philosophers say, to "save the phenomena".
...Again, it's a tempting prospect (cf. "On A Distinction Between Access and Phenomenal Consciousness" by Brent Silby for a discussion on Ned Block's distinction between phenomenal consciousness and access consciousness.) But ultimately it's hopeless IMO. For a start, the vast majority of states of consciousness latent in neurally organised matter and energy haven't been recruited to play an information-processing role by natural selection. That's what makes psychedelic drugs so challenging - not that they deliver profound truths about the external world, but precisely because they don't, or rather not in the way their advocates imagine. For sure, most everyday states of consciousness we experience have been recruited for an information processing function, e.g. waking visual experience. But visual experience has nothing intrinsically to do with the mind-independent world; and its qualitative nature is something about which even a utopian digital computer is invincibly ignorant. So much for digital superintelligence: not merely can't it explode, it can't even fizzle.
* * *
Yes, given an unphysically long time, a classical computer conceived in the abstract could simulate the behaviour of a quantum computer, e.g. factor 1500-digit numbers. But do the unitary world simulations we each instantiate disclose the algorithm of a digital computer or the purely classical parallelism of a connectionist system? Or instead the world simulation of a quantum mind? Unfortunately, there is an ambiguity about the term "simulation". Is one talking about simulating just the formal properties of a system? Or both the formal and subjective properties, if any? And how are they related?
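[A back-of-the-envelope gloss on "unphysically long", assuming a naive state-vector simulation - the 1500-digit figure is the one above, and the arithmetic is only illustrative. A classical simulator must track on the order of $2^n$ complex amplitudes for $n$ qubits:]

$$
1500~\text{digits} \times \log_2 10 \approx 5000~\text{bits}, \qquad 2^{5000} \approx 10^{1505}~\text{amplitudes},
$$

[dwarfing the roughly $10^{80}$ atoms in the observable universe.]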
If we're perceptual direct realists and suppose we perceive our surroundings, then the duration of macroscopic coherence is way too short. But let's assume that all that inputs from the optic nerve (etc) can ever do is select from a finite menu of mind/brain states. I also (controversially) assume Strawsonian physicalism (cf. ), but let's grant here Tegmark's ultra-rapid decoherence timescales. What would it feel like to instantiate 10^13 quantum coherent frames a second: a quantum computer optimised by hundreds of millions of years of evolution? On this story, the binding problem is dissolved because the supposedly discrete edges, textures, colours and motions distributively processed across separate regions aren't separate but can fleetingly constitute single phenomenal objects. Likewise for the unity of perception and the synchronic unity of the self. A strong prediction of this conjecture is that classical computational systems will never be conscious, or instantiate unitary world-simulations, or unitary agents. By contrast, the standard materialist story entails the Hard Problem of Consciousness, the Explanatory Gap, and the (phenomenal) Binding Problem. These are fancy names for what we might better call a falsification of materialism.
The era of mindless thoughts beckons? Or mindless "thoughts"?
("Alan Turing's legacy: how close are we to 'thinking' machines")
Jonatas, such irreducible macroscopic quantum coherent states are not just plausible but inevitable - unless we modify the quantum-mechanical formalism. To which critics of quantum mind would respond that such states occur over a vanishingly short interval, so they are computationally irrelevant in a warm, wet, noisy environment like the central nervous system. In fundamental Planck units, however, the timescales Tegmark discusses are huge. I agree with you that consciousness is wholly physical. Indeed I think its properties are exhaustively encoded by the formalism of physics. But what does being "physical" entail? Our own minds disclose that the nature of the "fire" in the equations is utterly at variance with a naive materialist ontology.
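[For scale - a back-of-the-envelope comparison, assuming Tegmark's ~$10^{-13}$ s estimate for neuronal decoherence and the standard Planck time $t_P \approx 5.4 \times 10^{-44}$ s:]

$$
\frac{\tau_{\text{dec}}}{t_P} \approx \frac{10^{-13}~\text{s}}{5.4 \times 10^{-44}~\text{s}} \approx 2 \times 10^{30},
$$

[i.e. even a fleeting coherence window spans some $10^{30}$ Planck times - "huge" in fundamental units, however brief by neuronal standards.]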
Materialists don't get much more forthright than Stephen Hawking. Yet even Hawking says we have no idea about the nature of the "fire". Turning Kant on his head, the possibility that one's own mind, the only part of the world that one doesn't know at one remove, discloses the nature of that fire has been best advocated in recent times by Michael Lockwood, but the idea was foreshadowed by Russell and Schopenhauer. No new physics is claimed, or at least certainly not by me: Strawsonian physicalists accept that physics, or at least tomorrow's physics, is causally closed and complete.
Yes indeed: to the perceptual direct realist, the world looks classical, or at least quasi-classical. But that's because classical macroscopic worlds are a powerful mind-dependent adaptation computationally optimised by hundreds of millions of years of evolution to maximise our inclusive genetic fitness. Although quantum mechanics is commonly supposed mysterious and by contrast classical physics well understood, it's actually the emergence of quasi-classical "worlds" (decoherent macroscopic Everett branches) that is poorly understood, though David Wallace makes a valiant stab at it in:
* * *
Why we should support Obama:
("Americans favor Obama to defend against space aliens: poll")
Jonatas, apologies, I was speaking tongue-in-cheek about Obama and aliens. For a start, I suspect the principle of mediocrity means we're probably alone in our Hubble volume (ignoring as irrelevant here the question of other, effectively decohered Everett branches), though this inference rests on the contestable assumption that the proportion of primordial life-supporting Hubble volumes in which life arose more than once is vanishingly small. And even if intelligent life does exist within our cosmological horizon, IMO the likelihood of their invading in Obama's second term is small.
* * *
I completely agree with you that consciousness is causal. This causal efficacy is one of the advantages of Strawsonian physicalism. It's materialists who have to wrestle with the (for them!) intractable problem of how phenomenal properties can have the causal power to allow us to talk about their existence.
Yes, the subjective textures of what we call empathy can be divorced from its functional role, though the prospect of building artificially intelligent zombie super-empathisers sounds bizarre.
Dustin, could you clarify what you mean when you say that the mind/brain isn't running a simulation of the mind-independent world? The physical/phenomenal states of the mind/brain are indeed not intrinsically about anything external to themselves. But when one is awake, our mental ("perceptual") states continually track and causally co-vary with gross patterns in the macroscopic environment on account of cross-modally matched input from the optic and auditory (etc) nerve. Peripheral input selects, but doesn't create, our egocentric world-simulations. When we are dreaming, our world-simulations run more-or-less autonomously and psychotically. When one "wakes up" one doesn't cease to instantiate a world-simulation; but the contents of that simulation are more tightly constrained. Antti Revonsuo's "Inner Presence" is well worth reading if you have time.
Dustin, my key worry is your concept of "conveyance". Can you clarify!? In the waking state, does one access the mind-independent world - or merely simulate some of its grosser fitness-relevant patterns? As you know, I argue that egocentric macroscopic world-simulations are a fitness-enhancing adaptation run by a quantum computer, the mind-brain.
Note I'm not suggesting that digital computers have - or will one day have - some alien kind of consciousness. Rather they don't have any consciousness at all, beyond the (hypothetical) micro-qualia of their ultimate physical components. Digital software can't "bind" such micro-qualia into macro-qualia - what one might more naturally call the medium-sized physical objects that abound in one's virtual world.
But does such phenomenal binding matter? What I haven't done is shown - rather than suggested - that our phenomenology of mind is functionally indispensable for some kinds of intelligent behaviour. Insentient Pentagon drones can't yet outperform a bumble-bee. But AI is a young science.
Dustin, maybe our differences over "simulation" are mainly linguistic. Some dreams - and drug-induced hallucinations - are extraordinarily rich and detailed (One of the few activities dreams rarely allow is reading significant quantities of text.) What's mostly lacking in dreams, however, is critical reflection on their plausibility. I don't think waking life in one's world-simulation differs greatly in detail, more in narrative coherence - some of the time anyway!
* * *
This century my money's on "muddling through" too. But then I'd struggle financially to manage a piggybank. On the plausible assumption that one is no more than an average futurist, history offers little comfort:
("Future Babble: Why Expert Predictions Fail and Why We Believe them Anyway")
Dustin, I'd argue artificial biological agents and (maybe) artificial nonbiological quantum computers centuries hence can be unitarily conscious. Thus I'm not seeking to privilege the natural, or even the biological. Rather I'm arguing that a classical serial digital computer, or a classically parallel subsymbolic connectionist architecture, can't generate "bound" phenomenal objects, a unitary experiential field, or a unitary phenomenal self. Neurological accidents resulting in e.g. simultanagnosia, or motion-blindness, or indeed severe schizophrenia, illustrate the sheer computational power of our normal capacity for phenomenal object-binding, a unitary cross-modally matched perceptual field, and a unitary subject of awareness. I'd hesitate before setting out a list of things an insentient real-world digital computer will never be able to do. But it's quite extensive: zombies are ignorant of the nature of their ignorance.
Dustin, surely right now we can program a digital computer that we believe to be insentient to tell us it's in pain. Indeed the most heart-rending distress vocalisations and pleas for mercy can issue forth from the voice synthesiser of a silicon robot sprinkled with sulphuric acid. So have we created artificial nonbiological sentience? If it walks like a duck and quacks like a duck...?
No. Or so I argue at any rate
* * *
Alternatively, the Manifest Destiny of sentient organic robots augmented by digital AI.
("The Manifest Destiny of Artificial Intelligence", American Scientist)
Dustin, I'd wholeheartedly agree with you. On ethical issues, one should always, wherever possible, err on the side of caution. If there is any substantive doubt whether a system is a subject of experience, then the benefit of the doubt should be granted - and the entity in question should be treated as worthy of moral consideration. Post-Shulgin, I'm less pessimistic than you about the prospects of rigorously deriving the properties of our "bound" macro-qualia from the underlying microphysics: I just think the project will take thousands of years to mature and entail an ontological and methodological revolution. However, on the all-important question of the pleasure-pain axis, I don't think thousands of years will be needed to map out the frontiers of Hell, so to speak. Thus by manipulating the SCN9A gene (cf. "An SCN9A channelopathy causes congenital inability to experience pain", Nature) we already know how to amplify or diminish the capacity to feel phenomenal pain - or abolish such capacity completely (via nonsense mutations). Later this century, I predict we'll have the knowledge to make the entire state-space of experience below hedonic zero off-limits.
Despite the common assumption that the panpsychist tends to overpopulate the world with sentience, recall that I'm arguing that only non-trivial quantum-coherent states instantiate interesting and potentially ethically significant forms of consciousness. Mere aggregates of micro-qualia don't non-negligibly matter - not inherently at any rate. This includes digital zombies - which don't IMO have a different "form" of consciousness from a doorstop.
Dustin, we agree: Strawsonian physicalism cannot by itself explain binding. Nor, on its own, can an explanation derive solely from invoking ultra-rapid sequences of irreducible quantum coherent states computationally optimised by hundreds of millions of years of evolution - if we assume a standard materialist ontology. But what if the two basic ideas are combined?
* * *
Do you need enhancing or remedying?
("Make Me Superhuman - the Top 10 Enhancenements I Crave")
Add several decades, in my estimate.
("Scientist Says Immortality Only 20 Years Away")
If the future is dystopian, I hope it resembles Huxley's vision rather than Orwell's:
Phasing out aging will presumably mean the end of procreative freedom in the West too...
("Pressure to Repeal China’s One-Child Law Is Growing")
Stumbling towards the abyss...?
("Experts condemn plans to lift ban on research into deadly H5N1 birdflu virus")
("Researchers eliminate aggression in birds by inhibiting specific hormone")
But will such knowledge come too late for humans? Utopia more wonderful than we can imagine IMO. Alas built on a mountain of corpses.
* * *
After discovering paradise engineering is now a dry-cleaning corporation, one worries about brand integrity:
("Paradise Engineering Corporation , Dry Cleaning Machine , India - Product Detail")
Do you trust self-confident optimists more than self-doubting pessimists?
("Why are people overconfident so often?")
Only the paranoid survive? Or chill out and sleep easy...
("Apocalypse Not: Here's Why You Shouldn't Worry About End Times")
But not all transhumanists are Singularitarians....
("My Falling Out With the Transhumanists")
I wish I had Stephen Wolfram's confidence about anything at all...
("I Like to Build Alien Artifacts")
No law of Nature says organic robots must grow old...
("Wake Up, Deathists! - You DO Want to LIVE 10,000 Years!")
Would you conserve this version of the Matrix?
("Science Fiction or Fact: A Planet-Destroying Superweapon
* * *
(Jean Anthelme Brillat-Savarin)
Abolishing Suffering via Bio-Engineering and Drugs - would this cripple social activism and art?
* * *
Is the world's thermonuclear weaponry in safe hands?
("Psychopathic boldness tied to US presidential success")
History for those of us who have "maximalist utopian aspirations" is not encouraging:
("The Devil in History: Communism, Fascism and Some Lessons of the Twentieth Century")
For technical reasons, I suspect hundreds of billions of years of sublime bliss lie ahead. But they are rooted in a Darwinian horror story. Historian Lewis Namier once remarked how "One would expect people to remember the past and imagine the future, but in fact... they imagine the past and remember the future." So the "future" most of us remember is a mixture of childhood science-fiction novels and Hollywood movies, Skynet and HAL. I'm much more sceptical than some of my transhumanist colleagues about nonbiological superintelligence, let alone a "robot rebellion".
Servants or masters?
("Swarming robots could be the servants of the future")
* * *
A hard-hitting piece by the estimable Hank Pellissier. I hadn't realised quite how many transhumanists - around half - give low weight to phasing out the biology of suffering:
("My Favorite H+ Philosophers - David Pearce, Martine Rothblatt, and Ursela K. Le Guin")
Should hunter-warrior minds have any place in politics?
("Who creates harmony the world over? Women. Who signs peace deals? Men")
A Conversation With Simon Baron-Cohen [4.30.12]
* * *
Dustin, I agree. In the absence of "hyper-masculinised" minds, blessings such as information technology, the prospect of systematically phasing out suffering, ageing and disease, and indeed the whole of Western science would be impossible...
"Primates wage war yes but they also cooperate". Indeed. Alas both evolutionary theory and the historical record suggest they cooperate primarily for the purpose of waging war - a conclusion that is rather less heart-warming.
or maybe we're the product of a botched experiment elsewhere...
("German woman fails to prove atom-smasher will end world")
Up to a point...
(Khudadad's Knols: "Can we blame evolution for terrorism?")
At most one can speak of a predisposition, conditionally activated. Alas in many men that predisposition is quite readily triggered.
But does post-Everett quantum mechanics suggest our survival is just an anthropic selection effect?
("The man who saved the world: The Soviet submariner who single-handedly averted WWIII at height of the Cuban Missile Crisis")
[though IMO equally compelling reasons could be given for why one will never fall asleep at night.]
[on responsible stewardship of Nature]
Which forms of Darwinian life - if any - would you conserve?
Conservation Biology versus Compassionate Biology
* * *
Stefan, phasing out life in the meatworld via destructive uploading might lead not to digital nirvana but to digital zombies - a novel if sociologically implausible form of existential risk. Also, IMO there is a powerful indirect utilitarian case not to contemplate "exterminating" any sentient being, but instead upholding the sanctity of life. Such a stance does not entail reproductive rights for obligate predators and parasites.
"Equal rights for parasites"?
Thankfully not all conservation biologists would be so bioconservative.
Stefan, to play devil's advocate, species essentialists who claim that behavioural modification entails loss of species identity should be encouraged to make their case without wearing clothes.
Time to design a different ecology.
("The Ecology of Fear")
OK, I spoke tongue-in-cheek. But the dilemma is real. On balance, I can't see a problem in citing even the most horrific historical research on human and non-human animals. But to cite contemporary research is tacitly to endorse it - which promotes its continuation. Of course, this is merely a Facebook wall: there's no sense in taking oneself too seriously here. But scientific journal editors have a responsibility to make clear to authors that unethically conducted research simply won't be published.
Chris, as you know, I entertain dark - and not very fruitful - negative utilitarian thoughts about the nature of the world. Hundreds of millions of years of pain and suffering, and more recently hundreds of years of sometimes ghastly scientific experimentation, have been necessary to throw up a scientific culture potentially capable of using biotechnology to build Heaven-on-Earth. Scrapping lethal, painful or otherwise harmful research on other sentient beings now would perhaps retard but IMO not fundamentally alter our evolutionary trajectory (For a counterargument to convergence, perhaps see e.g. )
Could things have been radically different?
Well, sadly I don't think the route to paradise in (most) other life-supporting Everett branches where organic robots evolve the capacity to phase out suffering is any less strewn with misery.
Let's hope I'm wrong!
There are indeed utilitarians who take such a robust approach, for instance:
("Why improve nature when destroying it is so much easier?")
But rightly or wrongly, allowing a species to go extinct does seem a radical step to take, regardless of how much harm its members cause to other sentient beings. Hence the case for genetic tweaking / behavioural modification of predators - at least until we can be sure we understand the ramifications of what we are doing. Of course, if one has just lost one's child to a snakebite, for example, then one may not be unduly concerned with the virtues of species conservation.
Do snakes have reproductive rights? Though I argue for genetic tweaking to prevent a species becoming extinct, I'm not really convinced such grisly behaviour has a long-term future.
Here is the PDF and PowerPoint of "Conservation Biology versus Compassionate Biology":
* * *
("The PhD’s Guide to Academic Conferences | Guest Blog, Scientific American Blog Network")
* * *
Alas the universe is a package deal:
("10 Reasons Why Oxytocin Is The Most Amazing Molecule In The World")
Should we use CRISPR genome engineering to enhance empathy?
("Individual differences in altruism explained by brain region involved in empathy")
How hard do you beat yourself up?
("Self-Compassion Fosters Mental Health")
Should addiction be promoted or discouraged?
("I want to know where love is: Research develops first brain map of love and desire")
* * *
One probably shouldn't use the term "welfare state" with an American audience...
("A Welfare State For Elephants? Costs and practicalities of comprehensive healthcare for free-living elephants
A case-study of compassionate stewardship of the living world")
An intelligent entry on a feeling whose biology I hope we can abolish altogether:
Thanks Joseph. I shan't ask whom you'd prefer to have as a household companion:
("Is this the world's cleverest dog")
A thoughtful critique from Alex Jones:
Alex Jones (Part 1)
[on antinatalism versus abolitionism]
Transhumanism in Brazil
I hope to explore abolitionist alternatives to anti-natalist philosopher David Benatar's plea for human extinction (cf. Better Never To Have Been: The Harm of Coming Into Existence)
Are there better ways to phase out the biology of suffering? Can coming into existence be made inherently sublime?
* * *
Many thanks Marion, don't worry, I've virtually gone native.
(not entirely I confess. Vegan abolitionist utilitarians are not indigenous to São Paulo)
On Tuesday, I'll be in Santa Maria arguing that the ideology of conservation biology should be replaced by an ethic of compassionate biology
See too Oscar Horta on the tension between animal advocacy and environmentalism:
("Oscar Horta Interview")
Thanks Pierre. Joking aside, it is far easier to win an audience over if you speak in their own native tongue. Of course, very few people can emulate Daniel Tammet, who mastered Icelandic in a week.
* * *
Can a digital computer ever understand the nature of comprehension?
("'A Perfect and Beautiful Machine': What Darwin's Theory of Evolution Reveals About Artificial Intelligence")
* * *
The culmination of the Western tradition?
("The Simpsons and Philosophy: The D'oh! of Homer Simpson")
[on future drugs]
Would you like to alter your default state of consciousness?
Is the Future of Drugs Safe and Non-Addictive?
[on gender]
Vive la différence! Or a post-gendered future? Where do you stand?
The Future of Gender
("Women spend 43 weeks of their life applying make-up and perfecting their face before a night out")
The science of flirtation...
("Women who flirt get better deal")
Which ads tap in to the key to your soul?
("Research: Men respond negatively to depictions of 'ideal masculinity' in ads")
"Red is the ultimate cure for sadness.”
("Note to waitresses: Wearing red can be profitable")
("Alternating gender incongruity: a new neuropsychiatric syndrome providing insight into the dynamic plasticity of brain-sex.")
("Eyes Reveal Sexual Orientation")
("What’s So Bad About a Boy Who Wants to Wear a Dress?")
Does the future belong to doll lovers or truck drivers?
("Hormones Explain Why Girls Like Dolls & Boys Like Trucks")
Did Cardinal Ratzinger ever renounce his youthful oath of allegiance to the Führer?
("Pope Decides Gay People Aren’t Fully Developed Humans")
("Women's preferences don't fit popular theory. Why Women Don't Fall for Hairy Guys Remains a Scientific Mystery")
("The color of attraction? Pink, researchers find")
Would you take the news in your stride....?
("Hong Kong man finds he is a woman after doctor visit")
Human counterparts would be interesting:
("Mice lacking serotonin swap sexual preferences")
A rule of thumb, not an immutable law of Nature.
("Women make better decisions than men")
[on pleasure science]
Should a portion of the U.S. military budget be diverted to fund the creation of "pleasure domes"?
The Future Science of Pleasure
"The Pleasure Dome Project is an idea to use fundamental physics to increase pleasure for the pursuit of happiness—to put the pursuit of pleasure on a firm scientific basis, rather than in the amateur ways we’ve pursued it so far as individuals."
My weapon of choice would generate utilitronium shockwaves.
Or alternatively:
Maybe future generations will be addicted to utilitronium...
("Addiction, the coming epidemic")
Yes indeed Jonatas. Actually, I was wondering. Assume that safe and effective ways to (re)calibrate the set-point of our hedonic treadmill do become available later this century. Which contemporary value systems are inconsistent with the development and use of such technologies?
I promise I'm all in favour of intelligence-amplification. Right now, however, our understanding of the nature of intelligence is extremely primitive. For example, there may be a trade-off between empathetic intelligence and mathematical prowess.
Which is more important to promote?
How can the Golden Rule be extended to members of other species as well as other races...?
("Kin and Kindness: Michael Shermer reviews The Moral Molecule: The Source of Love and Prosperity")
How evil are your eyebrows?
("Evil eyebrows and pointy chin of a cartoon villain make our ‘threat’ instinct kick in")
Some more than others...
("The Touch of a Man Makes Women Hot, Just the touch of a man's hand can make women hot and bothered, though they don't always notice it.")
Up to a point...
("Why You Should Smile at Strangers")
Object sexuality or objectum sexuality (OS); in German, objektophil
When can paradise engineering become a mature academic discipline?
Building a neuroscience of pleasure and well-being:
Psychology of Well-Being: Theory, Research and Practice - a SpringerOpen journal
Good news for amorous hypochondriacs...
("Kissenger: virtual lips for long-distance lovers")
Is it possible to make a compelling movie about Heaven?
New Pleasure Circuit Found in the Brain:
"We hope...the discoveries will unite pleasure and purpose, elevating everyday experiences to something truly satisfying, and perhaps even sublime."
A paean to Mill's higher pleasures? Not exactly...
("Wireheading: The Conundrum of Über-Hedonism & Simulated Bliss | High Existence")
[on the goodness or badness of the world]
Do you share Michael Faraday's optimism?
TEDxDelMar: Envisioning Transhumanity
The trouble is that, conversely, nothing is too horrible to be true either if it be consistent with the laws of nature.
I'll be in San Diego for a few days, then Stanford, then North Carolina, then NYC. If you live nearby, it will be great to catch up (assuming I don't get overwhelmed by Life and retreat back to my burrow!)
* * *
Darker spirits may turn Faraday on his head. All we can do, I think, is try and prevent the existence of experience below hedonic zero in our forward light-cone - and perhaps hope that Reality isn't as big as I fear.
On a slightly different note:
("The Wonderful Future That Never Was: Flying Cars, Mail Delivery by Parachute, and Other Predictions.")
I don't normally quote Jesus, but perhaps the guy had a point: “Again I tell you, it is easier for a camel to go through the eye of a needle than for a rich man to enter the kingdom of God."
Ezekiel, so long as you call it an invite to a Colloquium or Summit, we can meet up before then in the pub. (I don't take holidays, heaven forbid.) I'm especially interested in the use of enkephalinase inhibitors as potential mood-brighteners and analgesics. Endocannabinoids? Alas my brain is wired to take an afternoon nap:
("Wired to run: exercise-induced endocannabinoid signaling in humans and cursorial mammals with implications for the ‘runner’s high’")
I trust the paws of consenting undergraduates (or better, their professors) were used rather than captive nonhuman animals. I personally find cannabis induces profound derealisation, depersonalisation, introspection and philosophical rumination. So I'm rather envious of folk who can use it to get high:-)
Yes, taking psychoactive drugs can arbitrarily increase or diminish one's sense of the significance of things, quite independently of the propositional content of one's thoughts. I suspect posthuman life will not just be superhappy but hyper-meaningful too - though in what sense I don't know.
Envisioning Transhumanism had excellent speakers, stimulating late-night conversations and delicious vegan cuisine too. Superb. Transhumanism is clearly blossoming in San Diego - not so much Let-a-thousand-flowers-bloom as a veritable botanical garden...
What attributes does "God-like" conjure up in your mind?
("'God' is Cruel - we must conquer his 'Nature'")
Humans may well become transhumans who recursively self-improve themselves to become posthumans. Posthumans may enjoy some - but only some - of the attributes of divinity. But unless contemporary theoretical physics is wholly misconceived, most life-supporting branches of the Multiverse will always be inaccessible to rational agency - divine or otherwise. I hope I'm wrong.
Ezekiel, I'm afraid you're talking to a boring pillar of scientific orthodoxy (well, almost). I don't see how even posthuman superintelligence can e.g. defeat the second law of thermodynamics, or access other quasi-classical Everett branches that have decohered, or explore the zillions of different string vacua of M-Theory (etc). I'm quite willing to accept our conceptual scheme may be mistaken. But it's the best we've got - for now.
What each of us apprehends as the macroscopic world may indeed be a simulation run by the mind/brain. But perhaps where we differ is that I don't think e.g. the prebiotic Earth or other Hubble volumes or lifeless Everett branches (etc) are any less real simply because no one is around to observe them. The privileging of observers in Copenhagen-style quantum philosophising is an unfortunate legacy of positivism.
Most physicists would now accept that the notorious "collapse of the wavefunction" has no physical reality. The implications of the world being a gigantic superposition defy the imagination. Alternatives to Everett do exist, but they are ugly, to say the least (cf. )
The bookies' favourite is 11-dimensional spacetime (M-theory). No, I don't think Amit Goswami is delusional; indeed a physicalistic version of monistic idealism may even be true. But if so, the world is still, formally, exhaustively described by the continuous, linear, unitary, deterministic evolution of the universal wavefunction. No room for God, IMO.
Quantum chemistry makes my eyes glaze over too Ezekiel. But I think quantum (bio)chemistry explains why organic robots are normally conscious whereas silicon robots are perpetually zombies. I guess this is a topic for another post.
("Physics of life: The dawn of quantum biology")
If Boltzmann brains really exist, then one is overwhelmingly likely to be one. Likewise, if full-blown ancestor simulations exist, one is overwhelmingly likely to be one too. But any argument for either scenario should IMO explicitly set out in its premises what is normally only implicit, namely one's account of meaning and reference.
[just in case anyone imagines you are alluding to the glorious future promised by the Democratic Party of Turkmenistan]
Intelligent use of biotechnology can kill Satan off for ever. We can make His existence physiologically inconceivable. Alas this entails doing a fair bit of spadework in the Darwinian world.
("Lucky you! Accidents of evolution that made us human")
("What Would You Do - with the infinite extra years - If You Were Immortal?")
The idea we may conquer the biology of aging, but not of boredom, is surprisingly common:
("Do You Really Want to Live Forever?")
("Engineer Thinks We Could Build a Real Starship Enterprise in 20 Years")
Alternatively, the superposition principle of QM is the mathematically rigorous definition of non-existence - a Zero Ontology.
("Why Does the World Exist?: An Existential Detective Story")
Will neuroscience amplify or extinguish the self?
Should your brain code be proprietary or open source?
("Scientists developing device to 'hack' into brain of Stephen Hawking")
[on becoming transhuman]
Should humanists become transhumanists?
[on transhumanism in Texas]
Will a Texas audience be receptive to veganism, gun control and a pan-species welfare state?
SEBI Presents British Philosopher David Pearce
[on the Science and history of treating low mood ]
Can low mood be cured?
Post-Prozac Nation
[on the science of compassion]
Can the science of compassion inspire a technology to match?
The Molecular Biology of Compassion
How can testosterone poisoning best be overcome?
Testosterone makes us less cooperative and more egocentric, study finds
On the other hand,
Testosterone leads to fairness, not aggression: researchers
* * *
What are our odds of surviving this century?
Apocalypse Soon?
[on why anything at all exists]
Why is there "nothing" rather than nothing?
A Universe From Nothing
See too Jim Holt's wonderful little gem
Why Does the World Exist?
[on the upper bounds of biological intelligence]
Is biointelligence poised to explode or fizzle?
Evolutionary limits on cognition
Our website is now back online after "other agencies" (i.e. the US authorities) pressured registrar Godaddy into suspending nameservice. It's a non-commercial website. But apparently the US authorities objected to one of our links pages that contains the third-party URLs of online pharmacies and pain clinics.
I wonder when other countries are going to wake up to the immense and unaccountable power over the Net that control over the root nameservers gives the US government.
[on the abolitionist project]
I'm planning a short visit to the US. Would anyone like to meet up? I'm in Haverford 27-29 and Stanford on 1st Dec. But I can probably add to my carbon footprint...
I hope a Stanford audience is as herbivorous as the Quakers of Haverford...
Should conservation biology extend to Homo sapiens?
(the rejuvenation pills seem to be working:
The Moral Imperative of Transhumanism
(cf. The Abolitionist Project)
Lab-grown brains
Mark, if direct realism were a tenable account of perception, then we might indeed credibly claim, as you suggest, that "Every time you are 'in the flow' or lose yourself in thought while driving, you are a p-zombie." But (unless you close your eyes) the experiential contents of the world-simulation you instantiate while driving in a reverie don't disappear. Rather they are sometimes merely the backdrop to your stream of thought - as distinct from its focus. Awake or dreaming, organic robots are never P-zombies - or so I'd argue at any rate.

Randall, understanding why Brooke Greenberg (cf. ) doesn't age should help us devise radical anti-ageing therapies and eventually create "designer genomes" that replicate the desirable features of Syndrome X with the "age-freezing" process set at late teens or early twenties. Inorganic robots sometimes need enhancements, upgrades and part replacement. The same is presumably true of transhumanists blessed with utopian designer genomes. This doesn't mean some inexorable law of Nature condemns all robots to senesce without possibility of repair.
* * *
Another reason to go vegan:
Carnivores and Global Warming
Would you prefer a more "masculine" or "feminine" mind?
The He Hormone
Eat well but sparingly...
Eating less keeps the brain young
Half-starving the brain carries risks too.
Is post-Everett quantum mechanics a recipe for promiscuity?
Dating in the Multiverse
Do you find this idea extremely disturbing?
Excessive Worrying may have Co-Evolved with Intelligence
"How Happy Is Too Happy?
Euphoria, Neuroethics and Deep Brain Stimulation of the Nucleus Accumbens"
by Thomas E. Schlaepfer
A reproductive revolution is imminent...
Scientists rewrite rules of human reproduction
("Lab-grown egg cells could revolutionise fertility - and even banish menopause")
Horrific. The roots of suffering do not lie in the neocortex...
Food project proposes Matrix-style vertical chicken farms
How strong is your dopaminergic sense of things-to-be-done?
("Differences in dopamine may determine how hard people work")
The world wouldn't necessarily be a better place if we all spent life in dopaminergic overdrive. Dopaminergic drugs can induce an inner tension and they aren't touchy-feely: we need safe, sustainable empathogens too. "Laziness" is often a form of masked depression, though dopaminergics are only a flawed remedy - they are better at inducing a sense of urgency than long-lasting well-being.
"The Immorality of Morality"
Morality and the Dopamine Reward System
The insula is rich in MAO-B, which low dose selegiline selectively inhibits. But I don't recall any study tackling this question:
("A Small Part of the Brain, and Its Profound Effects" - New York Times )
("Dopamine impacts your willingness to work")
("Power really does corrupt as scientists claim it's as addictive as cocaine."
Imagine if you were Roman Emperor. I am sure I would start off trying to be Dave the Just. I'd probably end up as Dave the Depraved.
It's often as rational to forget as to remember...
("Scientists identify neurotransmitters that lead to forgetting")
Sometimes I wonder if posthuman superintelligences will regard the differences between humans and insects as mere details:
("Sundown Syndrome-Like Symptoms in Fruit Flies May Be Due to High Dopamine Levels Changes in Flies Parallel Human Disorder")
Do psychostimulants impair creative thought?
("Allowing the Mind to Wander Aids Creativity: Breaks alone do not bring on inspiration, rather tasks that allow the mind to water are what foster creativity")
("Sleep Deprived? Mind your dopamine.")
"So levels of income are, if anything, inversely related to felicity."
Global (Un)Happiness
What lessons should we draw?
Does your virtue have a price?
The Price of Virtue
("Sex survey: third of Britons 'would sleep with a stranger for £1million'")
(H. L. Mencken)
Duality of Longevity Drug Explained
The rapamycin story.
Would you rather be stuffed, buried, cremated, freeze-dried or cryogenically suspended?
Can't bear to bury dear departed Tiddles?
Why not have him freeze-dried and keep him forever?
How solid are the foundations of modern linguistics?
'There is no such thing as universal grammar'
What is your earliest memory?
Earliest Childhood Memories
"Consciousness is substrate-independent", says Christof Koch:
The future cometh
Science, technology and humanity at Singularity Summit 2011
How strong is the evidence for this claim?
Check out "Death by Euphoria"... ("Is the end of the world really nigh? Science is moving ever closer to understanding how, and when, humanity may be extinguished")
Should the well-being of all sentience be the basis of civilisation?
* * *
Heart-warming. But our compassion should be systematic...
("Saved from a muddy grave: Baby elephant and its mother pulled from lagoon where they got stuck because they wouldn't be separated")
"The high number of abortions in Israel are delaying the arrival of the Messiah, Israel's two chief rabbis have said."
Abortions delay Messiah's arrival, Israel's chief rabbis say
"Girls with lighter locks bring home around £600 a year more than brunettes or red-heads."
Blondes have more funds
Buddhist-inspired interventions to extinguish desire might not work out as planned:
("Eliminating dopamine turns fruit flies into masochists")
("Love-smitten consumers will do anything for their cars and guns")
How can we best understand states of consciousness that have never been recruited by natural selection for information-signalling purposes?
("Dirty Pictures")
* * *
Do your sympathies lie with Schopenhauer or Aubrey de Grey?
What's your cognitive style?
("Scientists and autism: When geeks meet")
How many of your relationships could survive mutual mind-reading? ("The terrible truth. Technology can now see what people are thinking.")
Can we stop eating each other?
* * *
I hope Tyler Cowen is misquoted...
* * *
Do the deepest mysteries lie in the stars or in our minds?
("Not Such a Stretch to Reach for the Stars")
* * *
Will the prayers of the faithful be answered with another reprieve?
("The End Of The World Again")
Could eternal youth be genetically preprogrammed?
("DNA sequenced of woman who lived to 115")
Do you practise Radical Honesty or Tactful Diplomacy?
("The perils of polite misunderstandings")
Does mental health depend on irrational optimism?
("Brain 'rejects negative thoughts'")
What is humanity's most urgent challenge?
Might the Testosterone Theory of Greatness play a role too?
What are the upper bounds to human self-delusion?
(cf. )
Time to phase out Humanity 1.0... ("Domestic violence gets evolutionary explanation")
Human, transhuman or posthuman?
What should we be aiming for?
("Steve Fuller: it's time for Humanity 2.0")
When do you reckon a digital computer will match the sentience of an earthworm?
Which would you choose?
("Study finds we choose money over happiness")
How (fe)male is your mind? ("Women More Likely Than Men to See Nuance When Making Decisions")
What happens when you bring three guys who believe they are Jesus Christ into one room?
("Diary. Jenny Diski")
How would you improve your source code?
("Read / write your own genetic code")
Would you entrust medical diagnosis to a human?
("Dr. Watson: How IBM’s supercomputer could improve health care")
How rational is depressive realism?
("Study: Self-delusion may be a winning survival strategy")
* * *
Should open source software in genetics be encouraged?
("Welcome to the world of Biogenica; Genetic Engineering and Manufacturing")
* * *
The solution to weakness of will?
("The Sugary Secret of Self-Control")
* * *
'"6 months to bio-sausages."
What is a realistic timescale for closing the death factories?
("Meat without slaughter: ‘6 months’ to bio-sausages")
Could you fall in love with your (fe)male counterpart?
("You look good... just like me!: Lookalikes website uses facial recognition software to help singles find their perfect match")
How close to Milgram's 450 volt limit do you guess you'd go?
("50th anniversary of Stanley Milgram's obedience experiments")
Superlongevity or superhappiness? Which is technically easier?
("Imagining the Downside of Immortality")
Europe's forgotten 'religion':
("Europe's forgotten 'religion'")
* * *
What can humans learn from hyraxes?
("Social Network Equality Helps Hyraxes Live Longer")
Will salvation be universal - or only for the elite?
(Albert Schweitzer)
See you tomorrow I hope....
Until they don't...
("Stock markets can regulate themselves")
* * *
Darwinian life is grotesquely unfair:
("Nice Guys Finish Second, Women Finish Last")
* * *
Did you have a happy childhood?
("How your childhood is written in your face")
It pays to know the odds?
"Maths professor who's hit a multi-million scratchcard jackpot"
("Nice Guys Finish Second, Women Finish Last")
Popper, Feyerabend or Machiavelli? Whose work best captures the spirit of modern science?
"Free Radicals: The Secret Anarchy of Science", By Michael Brooks
When you enter a room do you try not to be noticed - or overawe folk with your sheer physical presence?
("Teenagers: Being 'scrawny' is not an option")
* * *
Do you find Life meaningless / suffer from DDD (dopamine deficiency disorder)?
Or enjoy dopaminergic overdrive?
("A Trick of the Mind. Looking for patterns in life and then infusing them with meaning, from alien intervention to federal conspiracy")
* * *
Girls now eight times more likely to live to 100 than 80 years ago:
("She's the maths professor who's hit a multi-million scratchcard jackpot an astonishing FOUR times... Has this woman worked out how to win the lottery?")
Does the science of pleasure need more engineers?
("The science of pleasure: vice or virtue - which motivates you?")
Bad news for whales?
("The end of evolution? Scientists say human brain may have reached full capacity")
A minor anomaly or the key to the universe?
("Existence: Where did my consciousness come from?")
* * *
Would you like to wake up next century?
("Cryonics: the chilling facts")
Are you the holographic projection of a drama unfolding on a flat surface a few billion light years away?
("Existence: Am I a hologram?")
Professor Andrews believes "depression may actually be a natural and beneficial - though painful - state"
Beneficial to what or whom?
("Patients who use anti-depressants are more likely to suffer relapse, researcher finds")
"Everything we do is for the purpose of altering consciousness."
(Sam Harris)
Do you agree?
("'The Blog : Drugs and the Meaning of Life' : Sam Harris")
Do you practise "honest arrogance" or "hypocritical humility?"
("Narcissists Need No Reality Check")
What kind of baboon are you?
("Study of Alpha Male Baboons Shows It’s Stressful at the Top")
* * *
Should human intervention in the rest of living world be based on an ideology of conservation or compassion?
("ARZone Podcast 6 ~ Intervention, Interaction and Non-Interference - Animal Rights Zone")
* * *
Is your brain a sacred temple or a neurological slum?
("The Neurobiology of Bliss Sacred and Profane": Scientific American)
* * *
Do you strut or slouch?
("Your mother was right: Study shows good posture makes you tougher")
* * *
Are licensed "antidepressants" worthy of the name?
("In Defense of Antidepressants")
* * *
("Warning: Mad Scientists (Transhumanists) May Force You to Be Happy")
Are lean-faced men any better?
("Are wide-faced men rascals?")
In hell or in heaven...?
("Is living forever in the future?")
How about "I am trapped"?
("Rhesus monkeys have a form of self awareness not previously attributed to them")
Does unfairness make you indignant?
("Tendency Toward Egalitarianism May Have Helped Humans Survive")
Farewell Brazilian beach babes; hello scholarly work...
("IV Colloquium on Ethics and Applied Ethics (UFSM)")
* * *
What's going on?
("When the multiverse and many-worlds collide")
Will (post)humans travel to other solar systems - and if so, what's your best guess when?
("Pentagon dreams of Star Trek interstellar travel")
Perhaps 3-D print-outs are more likely.
A recipe for problems with the in-laws?
("Breeding with Neanderthals helped humans go global")
Do we have ownership rights over bodies?
("How we come to know our bodies as our own")
Is rational argument typically male dominance behaviour in disguise?
("People Argue Just to Win, Scholars Assert")
The death spasms of Mother Nature may still surprise us...
("Earth may be headed into a mini Ice Age within a decade")
Do you practise "benevolent sexism"?
("Chivalry is actually 'benevolent sexism', feminists conclude")
How strong is your "behavioural immune system"?
("The Behavioural Immune System")
To what age would you like to regress / progress?
("'I woke up in the wrong life'")
Will it fly?
("Transparent plane of 2050 where passengers can see the sky through the cabin walls")
Should utilitarians advocate market economics?
Income disparity makes people unhappy?
Any hot tips?
("Sell Descartes, buy Spinoza")
Are you a body hacker?
("Invasion of the body hackers")
Is humanity doomed? (cont.)
("Climate change, doomsday and the 'inevitable' extinction of humankind")
Can one have "true self"?
("The politics of authenticity")
Are you satisfied with your default state of consciousness?
("Underground Website Lets You Buy Any Drug Imaginable")
The Gulag and the Holocaust were not organised by bonobos...
("Ariel Casts Out Caliban: Bonobos, "Killer-Apes" and Human Origins")
Some branding problems defy easy solution...
("Sympathy for the devil?")
How recursive can you get?
("CultureLab: Thoughts within thoughts make us human")
Can anything be known inaccessible to science?
("Science is the only road to truth?")
Downhill all the way....or the best is yet to come?
("Evolution of sport performances follows a physiological law")
Time to get chronically loved up?
("How Love Conquers Fear: Hormone Helps Mothers Defend Young")
What will it take before we finally say: Enough.
("Concealed Cruelty - Pork Industry Animal Abuse Exposed")
Is eternal bliss a religious delusion or an engineering challenge?
("A happy life is a long one for orangutans")
Does only one species deserve to be free?
("Escaped cattle take over street")
Can there be a scientific counterculture? Or just woolly-mindedness?
("Hippie days: How a handful of countercultural scientists changed the course of physics in the 1970s")
Will the utopian visions of the 21st century have a happier outcome than those of the 20th century?
("'Invent Utopia Now, Transhumanist Suggestions for the Pre-Singularity Era', an Ebook by Hank Pellissier)
Fit to govern?
Beauty and the Beasts:
("The sight of a pretty woman can make men crave war")
Are you ready to transcend biology?
("So What’s The Deal With The Singularity Again?")
"Addiction is actually rooted in the brain's inability to experience pleasure."
How severe is your neurological deficit?
("'The Compass Of Pleasure': Why Some Things Feel So Good")
("'The Illusions of Psychiatry' by Marcia Angell")
Is life getting better?
("Japan's 'Sense-Roid' replicates human hug")
When will our ethics acknowledge that feelings are more important than a capacity for logical inference?
("Brainy parrot shows it can think like a 4-year-old child")
"Are humans capable of utilitarianism?"
("The Biology of Ethics")
* * *
Ideally would you tweak your organic body - or change it altogether?
("Dating website for beautiful people dumps 30,000 members")
How much do you want a stronger memory?
("Shock and recall: Negative emotion may enhance memory, study finds")
What (if any) aspects of human beings are worth conserving?
("Interview with Ramez Naam, Author of 'More Than Human'”)
* * *
Sadly my daily walk to Waterstone's coffee shop doesn't qualify, but perhaps next year...
("Vegan Bodybuilding & Fitness")
(William James)
Should the study of consciousness be an experimental discipline?
DIRTY PICTURES - Alexander Shulgin documentary movie trailer, SXSW 2010
* * *
When will killing and abusing pigs come to seem as morally disgusting as killing and abusing dogs?
("Chinese dog eaters and dog lovers spar over animal rights")
Are you too happy?
("Happy guys finish last, says new study on sexual attractiveness")
How conscious is your brain stem?
("Digging into our consciousness")
"No one gossips about other people's secret virtues."
(Bertrand Russell)
How discreet are you?
("Why We Love Juicy Gossip Mags")
* * *
(Henry Wadsworth Longfellow)
How easily disarmed are you?
("Looking for Empathy in a Conflict-Ridden World")
I don't want to be taller. I'd just prefer other men to be shorter...
("Standing up to fight: Does it explain why we walk upright, why women like tall men?")
The new title holder. Why are the world's oldest getting younger?
("Brazilian woman aged 114 is world's oldest person")
How are you spending your last 72 hours?
("May 21: Another Doomsday Upon Us? | May 21 Judgment Day, Harold Camping & Doomsday Predictions")
When will we stop our frightful treatment of other sentient beings?
("When Will Scientists Grow Meat in a Petri Dish?")
Can we make life fair?
("Egalitarian Planet: Five proposals to elevate society by reducing disparity")
* * *
Would you like to know your date of death as well as date of birth?
("The £400 test that tells you how long you'll live")
* * *
"Sanity and happiness are an impossible combination." (Mark Twain)
True or false?
("In China, gauging happiness is all the rage")
How can we transcend Darwinian psychology?
("Worries About Success Can Make You Successful")
“Beware the man of one book.”
(Saint Thomas Aquinas)
The library of transhumanism is diverse...
CONFERENCE PROGRAM | Humanity+ @ Parsons : NYC
Would you like to be human +, transhuman or posthuman?
("Do We Want to Be Supersize Humans? - Room for Debate")
How human are you?
("Belief in God is part of human nature - Oxford study")
Should we feel less guilty or more so?
("Doing good so you don't feel bad: Neural mechanisms of guilt anticipation and cooperation")
I guess the 107 wives of this Nigerian faith healer suggest high mating intelligence. Can secular rationalists compete?
("Always groom for one more")
How well-proportioned is your Mr Homunculus?
("'Little Human' Reveals Body's Most Touch-Sensitive Areas")
Non-human victims need protection too.
("'Cannibal' arrested after 'dinner' changes his mind")
Will posthuman bliss intensify our perceptions?
("Scientists show how adversity dulls our perceptions")
How can we abolish the pain of social rejection?
("Professor: Pain of ostracism can be deep, long-lasting")
How strong is your Machiavellian intelligence?
("How to tell when someone's lying")
"Art is making something out of nothing and selling it.”
(Frank Zappa)
Does great art bring out your inner philistine?
("Brain scans reveal the power of art")
Will "slut" ever acquire the positive connotations of "stud"?
("Why is the word 'slut' so powerful?")
How often do you smile when you're alone?
("Research reveals true worth of a smile")
* * *
Are you a contented bloodsucker?
("We actually 'become' happy vampires or contented wizards when reading a book")
" Everybody's private motto: It's better to be popular than right.”
(Mark Twain)
How often do you bite your tongue?
("Popularity Sucks: Kids Should Embrace Their Inner Loser, Author Says")
Can sceptics who doubt that one can derive an "ought" from an "is" still be sceptical when in extreme pain?
("'The Science of Right and Wrong' by H. Allen Orr)
Will the dopamine DRD4 gene help take us to the stars?
("Out-of-Africa migration selected novelty-seeking genes")
* * *
Should we use birth control rather than famine to regulate population growth?
("Birth control prescribed for Hong Kong monkeys")
Should we spread genetic malware?
("Happiness linked to a gene that comes in long and short versions")
("Observations: Artificial Intelligence: If at First You Don't Succeed...")
Should we be more Scandinavian?
("Life satisfaction and state intervention go hand in hand")
Do you yearn to be a disembodied soul?
("Men Think About Sleep & Food as Much as Sex | Men and Women's Sex Thoughts | Gender Differences")
Does anyone know you better than you know yourself?
("Who knows you best? Not you, say psychologists")
Do you thrive on stress?
("Does stress help us succeed?")
Would you prescribe blue pills or red pills?
("Which Is More Important: Truth or Happiness?)
"In wartime, truth is so precious that she should always be attended by a bodyguard of lies." (Winston Churchill).
How well protected is she now?
("Bin Laden brings out the best in conspiracy theorists")
How (un)manly are you?
("Think it's easy to be macho? Psychologists show how 'precarious' manhood is")
When chatbots surpass the best human conversationalists, whom would you prefer as a lifetime companion?
("Computer says: um, er... | Computers v humans")
Sanity or psychosis?
("Timing, meaning of 'I love you' differs by gender")
Does 'something' give your life purpose?
("The rewards of doing 'something'")
"But what could I eat...?"
(" - Vegan Recipes and Cooking Tips")
Is our horror story unique?
("'The Eerie Silence' by Paul Davies – review")
Would absolute power change your personal life?
("When it comes to infidelity, does power trump gender?')
Can you be young and wise?
("Botox blunts emotional understanding, study finds")
How much of your life is spent feeling annoyed?
("'Annoying' Book Review - The Invidious Irritants That Irk Individuals")
If you lost your head, would you want to grow another one?
("Scientists create stable, self-renewing neural stem cells")
Are we stuck?
("Scientists suggest spacetime has no time dimension")
* * *
"Life can only be understood backwards; but it must be lived forwards." (Kierkegaard).
Will the story have a happy ending?
("The untold story of evolution")
* * *
See you at, I hope.
David Pearce ARZone Live Guest Chat
April 23, 2011 at 3:00pm
* * *
Does signalling you're a "winner" make others feel a "loser"?
("Happiest places have highest suicide rates says new research")
Do you ever wish the world had fewer dimensions?
("Primordial weirdness: Did the early universe have 1 dimension?")
Who is pulling your strings?
("The Neuroscience of the Gut")
Excellent to see SIAI's [MIRI] Singularity FAQ published.
Could Section 2.9 be amplified?
("Singularity FAQ | Singularity Institute for Artificial Intelligence")
* * *
Should we encourage belief in free will?
("New Scientist TV: Why free will may be an illusion")
Do you need a conscientiousness pill?
("‘Longevity Project’ - Review - In 80-Year Study, Good News for the Diligent")
Might posthumans think humans had autism spectrum disorder?
("The science of empathy")
Do you take nootropics?
("Limitless Movie Thrills, But What’s The Future of Smart Pills?")
Should euthanasia be legalised?
("‘Cheerleading’ BBC to show an assisted suicide on TV")
Do you 'act your age'?
("Amortality: Why It's No Longer Necessary to Act Your Age")
How well developed is your neurological capacity for embarrassment?
("UCSF team describes neurological basis for embarrassment")
Sad news. Walter Breuning was probably the last man alive who could remember the 19th century. He remained cognitively intact until the very end. Walter's advice to take daily aerobic exercise and eat only two meals a day strikes me as sensible - and worth emulating by all of us.
("'World's oldest man' dies at 114")
When do you think best?
("The Philosophy of Insomnia")
An amazing breakthrough...
("Languages Grew From a Seed in Africa, Study Says")
* * *
Are you happy with your default state of consciousness?
("Researchers argue 'addiction' a poor way to understand the normal use of drugs")
Do you ever worry that what you write may do more harm than good?
("In Praise of Marx")
Is it "objectively" better to build Heaven rather than Hell?
Or is ethics invented rather than discovered?
("The moral formula: How facts inform our ethics")
Do you share Ray Kurzweil's vision of The Singularity?
Post Transcendent Man
April 9, 2011 at 2:00pm
Lecture room B34, Birkbeck College
Can medical science treat depression?
("In praise of antidepressants")
Should we tamper with the wisdom of Mother Nature?
("Test children's genes before they have sex")
Are you a "militant atheist"?
("A.C. Grayling: 'How can you be a militant atheist? It's like sleeping furiously'")
Do you have temporal parts?
Temporal Parts (Stanford Encyclopedia of Philosophy)
Can posthuman superintelligence prevent all technical accidents?
("The Difference Engine: Wild blue coffin corner")
How can we best overcome self-serving bias?
("The meat paradox: how we can love some animals and eat others")
Which are worst...sticks and stones? Or unkind words?
("Site Helps Slighted Stars Feeling Internet’s Sting")
Imagine you could genetically choose the hedonic set-point of your future children. What setting would you want for your children - on a scale of minus 10 (lifelong despair) to 0 (hedonic zero) to plus 10 (lifelong bliss)?
("Sissela Bok - 'Exploring Happiness: From Aristotle to Brain Science' - Reviewed by Owen Flanagan")
The Shulgin Index, a comprehensive survey of the known psychedelics, from one of the greatest scientists who ever lived...
(Transform Press)
Can self-deception be rational?
("Does belief in free will lead to action?")
"Consciousness is a disease.”
(Miguel de Unamuno)
How healthy are you?
Topic: Theories of Consciousness Camp: Agreement
Simply taking free-form amino acid mix without tryptophan on an empty stomach can dramatically lower serotonin levels in humans. [This shouldn't be done by depressives.] Will you be experimenting?
("Brain Chemical Influences Sexual Preference In Mice")
How will the future smell?
("People Who Feel No Pain Can’t Smell")
Cheating the heat death of the universe might be a challenge...
("Can we live forever?")
Is religion doomed?
Physics predicts end of religion
("Religion may become extinct in nine nations, study says")
Does your brain buzz with mathematical functions?
("Genius at work")
The case for compassionate intervention in Nature...
Oscar Horta
Is taking drugs "probably linked to an inbuilt tendency to act without thinking?"
("What drugs do to the brain")
How do you respond when someone says "But I like the taste!" ?
(" - The video the meat industry doesn't want you to see.")
"Hell is other people", said Sartre, probably in French, yet solitary confinement is reckoned a cruel and unusual punishment. How much time do you enjoy spending alone?
("The power of lonely")
Are you a lone wolf or a sheep?
("Jumping On The Bandwagon Brings Rewards")
How do matter and energy generate the experience of God?
("Kevin Nelson: 'Near-death experiences reveal how our brains work'")
Overlords or servants...?
("The new overlords")
* * *
Don't worry. Be happy. Not exactly...
("Keys to long life: Longevity study unearths surprising answers")
The testosterone theory of IQ...
("U of A researcher questions whether genius might be a result of hormonal influences.")
Eating Animals - It's Mass, Mechanised Murder
("Tell Your Friends to Go Vegan")
The pioneer of artificial quantum computing - and leading exponent of post-Everett quantum mechanics - is speaking tonight at the Oxford Transhumanist Society.
David Deutsch on How To Think About The Future
March 10, 2011 at 6:00pm
And most humans, though this is harder to prove...
("Chickens are capable of feeling empathy, scientists believe")
At what age should children be taught about the Daily Mail?
("Should five-year-olds be taught about sex in such an explicit way?")
Does suffering cleanse your soul?
("Cleansing the soul by hurting the flesh: The guilt-reducing effect of pain")
What does your profile picture say?
("'The 4 Big Myths of Profile Pictures' - OkTrends")
How about a Happiness Explosion?
("Why an Intelligence Explosion is Probable")
But are the worst crimes committed by people with a misplaced sense of right and wrong?
("Criminal Minds Are Different From Yours, Brain Scans Reveal | Neuroscience & Psychology of Criminal.")
What would you do?
("What should you do if a cash machine overpays?")
Is humanity awaking from Aubrey de Grey's "Pro-aging trance"...?
("'Ageless' animals give scientists clues on how to overcome the aging process")
Would you prefer children or nephews and nieces?
("Parents rationalize the economic cost of children by exaggerating their parental joy")
An apple a day...
("Polishing the apple's popular image as a healthy")
Is alcohol part of your cognitive fitness regimen?
("Research suggests alcohol consumption helps stave off dementia")
Is mood-enrichment a recipe for life-extension?
("Study: Happiness improves health and lengthens life")
Should we conserve the biology of primate dominance hierarchies?
("Staring contests are automatic: People lock eyes to establish dominance")
The Rise of the Machines?
("Knee-high robot wins Japan marathon")
"Every great advance in natural knowledge has involved the absolute rejection of authority.” (Thomas Henry Huxley)
("'The Evolution of Credibility' - The Scientist)
But you can't wake a zombie...
("'Automaton, Know Thyself: Robots Become Self-Aware': Scientific American")
Would you rather have a high hedonic set-point and locked-in syndrome - or be depressive and physically fit?
("Most 'locked-in' people are happy, survey finds")
Is the best cure for prejudice self-love?
("People with low self-esteem show more signs of prejudice")
Do good looks influence your political judgement?
("Rightwing candidates are better looking, study shows")
* * *
Domestic bliss?
("The world's biggest family: Ziona Chan has 39 wives, 94 children and 33 grandchildren")
Does our future lie as a single subject of experience or many?
("CultureLab: Telempathy: A future of socially networked neurons")
Do you want a body-image?
("Researchers use virtual-reality avatars to create 'out-of-body' experience")
Do you think of yourself as a victim?
To escape blame, be a victim, not a hero, new study finds
* * *
Men killed over 100 million human beings - and billions of nonhuman animals - last century. What's your best guess at this century's total?
("Women-Only Leadership: Would it prevent war?) Right now I’m reading
Disturbing stuff. Not everyone here thinks catastrophic thermonuclear war is likely this century. But when I wrote the original note on Hank’s FB Wall, I was musing purely on technical ways to reduce global catastrophic risk and existential risk. Thus if an expert study group were to conclude that there are no grounds for supposing that electing all-women leadership - and the sea-change in political culture it would entail - would lead to a statistically significant reduction in the risk of war, then I wouldn’t support the proposal - it’s not about radical feminism, political correctness, “gender war”, etc. If, on the other hand, a statistically significant reduction in risk can be anticipated, then men and women alike would be well advised to elect our representatives accordingly. Without broad democratic consent, the proposal couldn’t possibly work. Risk reduction is in all our interests, men and women alike. Unfortunately, in the ancestral environment of adaptation, waging war was often genetically adaptive for males because the optimal reproductive strategy for men differed from the optimal reproductive strategy for women. Selection pressure ensured that this male biological propensity for competitive risk-taking, territorial aggrandizement and violent aggression has endured into the present era. Hence the threat of nuclear Armageddon.
* * *
An all-female political class is an appallingly crude and discriminatory way to reduce global catastrophic and existential risks. But the proposal has nothing to do with political correctness. Rather the question is whether an all-female legislature and executive would make any statistically significant difference to the likelihood of use of weapons of mass destruction this century.
For example, would an all-female political class be any less likely to fund, develop and authorise the use of nuclear weapons systems than the existing male-dominated power elite?
If the critics above are correct, then no statistically significant difference can be expected. Maybe so: I’d just urge rigorous evaluation of the proposal on its technical merits rather than a knee-jerk response. Recall that among our close relatives, chimpanzees, war-like behaviour towards neighbouring tribes is practised entirely by males - though females can individually be just as vicious. Historical and ethnographic evidence, supplemented by evolutionary psychology, confirms that little has changed in the genus Homo over the past few million years beyond absolute killing capacity: “amazons” are not unknown but rare.
Yet haven’t civilised 21st century humans transcended our primitive sex-typical biology? Surely we can relegate traditional gender stereotypes to history?
Giulio, I hope you’ll forgive my doubt.
Nicer, I promise I have no intention of “vilifying” testosterone (or indeed men). Rather we’re confronted with a striking fact. Throughout history, and throughout prehistory, and among our closest primate relatives, wars have been overwhelmingly instigated and waged by members of one gender. Evidently, genes and cultures have co-evolved. They interact in complex feedback loops. So an explanation of the disproportionately warlike behaviour of one gender involves cultural factors as well as a detailed hormonal and neurobiological story. But simply invoking “culture” as an explanation is not enough - any more than simply invoking biology (or indeed “testosterone”). Presumably we want to understand how, and why, independently evolved and otherwise disparate human cultures exhibit this striking uniformity of behaviour i.e. why throughout history men and not women have been “warriors”, whether we’re considering New Guinea tribesmen, Yanomami Indians, or Cold War advocates of “preventive” thermonuclear strikes against the “enemy”. Or alternatively, is this cross-cultural consistency of gender-roles merely a freakishly improbable coincidence?
Of course testosterone isn’t “bad”: it’s just a steroid hormone. High testosterone function is associated with optimism and vitality. The “male hormone” is critical to female sexual response. Testosterone has undoubtedly played a key role in some of our highest scientific achievements as a species. Unlike in lizards, for example, boosting testosterone function doesn’t automatically boost aggression. In humans, this trait is only conditionally activated. In other circumstances, as you note, testosterone can promote e.g. status-promoting concern for fairness. This is no more of a paradox than pointing out how our favourite pro-social intoxicant, ethyl alcohol, is disproportionately implicated in domestic assaults and crimes of violence. The significant causal role of ethyl alcohol in (much) violent crime doesn’t make alcohol a “violent drug” any more than testosterone is a “violent hormone”. It’s merely a risk factor.
In one sense, however, this discussion is academic. I predict that members of the gender that killed over a hundred million people last century will kill hundreds of millions of people this century. Whether this outcome could have been averted by electing all-women political representatives is unlikely to be put to the test.
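The "statistically significant reduction in risk" point can be made vivid with a toy calculation - a minimal sketch, assuming a constant, independent yearly probability of catastrophic war; both annual probabilities below are invented placeholders, not estimates:

    # Toy model: how a modest cut in yearly war risk compounds over a century.
    # Both annual probabilities are hypothetical placeholders, not estimates.
    def century_risk(annual_p, years=100):
        # Probability of at least one catastrophe, assuming independent years.
        return 1 - (1 - annual_p) ** years

    print(century_risk(0.01))   # e.g. 1.0% per year -> ~0.63 over a century
    print(century_risk(0.005))  # e.g. 0.5% per year -> ~0.39 over a century

Even halving the (unknown) annual risk would change the century-long odds dramatically - which is why the proposal deserves evaluation on its technical merits rather than dismissal.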
* * *
Are you a transhumanist?
("Transhumanists Coming Out of the Closet")
* * *
When (if ever) will you be emulated by AI?
("Mind Versus Machine")
* * *
How much of your life is spent above neutral "hedonic zero" ?
("George MacKerron: 'I can measure how happy you are – and why'")
* * *
What scenes do meat commercials leave out?
("Farm to Fridge - The Truth Behind Meat Production")
* * *
Would power corrupt your character?
("Could You Become a Dictator?")
* * *
Are you outraged?
("Left is mean but right is meaner, says new study of political discourse")
* * *
2045? I wish...
("2045: The Year Man Becomes Immortal")
* * *
Can we reduce global testosterone production to safer levels?
("Extra testosterone reduces your empathy")
* * *
No word on the condition of the poor bird...
("Man stabbed by cockfighting bird")
* * *
Supergran video. Amazing.
("Handbag-wielding grandmother first interview: 'somebody had to do something'")
* * *
Are you sending the right signals?
("'He loves me, he loves me not...': Women are more attracted to men whose feelings are unclear")
* * *
Time to stop playing roulette?
("Born miserable - some people genetically programmed to be negative")
* * *
Extrapolating, what year will a digital computer match the sentience of a flatworm?
("Meet Watson, the computer set to beat Jeopardy's champions | Technology")
* * *
Alas I've always seen cats through the eyes of a mouse.
("Cat Ladies - the documentary")
* * *
How can we shut the death factories?
("Supermarkets force abattoirs to fit CCTV after secret film exposes abuse")
* * *
Do you agree with Nietzsche?
("Amor fati")
* * *
The case for beating oneself up.
("Feel the pain, shed the guilt")
* * *
Perhaps add LSD and stir?
("Flash of fresh insight by electrical brain stimulation")
* * *
Why it's best to date the cock of the walk?
("Bush-league male mates stress out female finches")
* * *
("How spinach makes you big and strong like Popeye")
* * *
Are your beliefs temperature-dependent?
("Feeling warm makes people more likely to believe in global warming, study finds")
* * *
The Coffee Conundrum:
("How coffee can boost the brainpower of women... but scrambles men's thinking")
* * *
"The parallel nature of the human brain implies that general intelligence does parallelize well."
* * *
"Happy-People Pills for All" by Mark Walker...
* * *
US Scientists work to grow meat in lab:
* * *
The pdf of my H+ talk. Alas more of a diagnosis than a prediction...
* * *
Men Forgive Girlfriends Who Cheat - If It's With A Woman
* * *
Model predicts 'religiosity gene' will dominate society:
* * *
Google censors peer-to-peer search terms:
* * *
Ogling by Men Subtracts from Women's Math Scores:
* * *
Infants ascribe social dominance to larger individuals:
* * *
On the hunt for universal intelligence:
* * *
You Are Not Qualified to Run Your Own Brain:
* * *
Dietary Fat Intake and the Risk of Depression: The SUN Project
* * *
It's Me or the Dog! Who Would You Choose?
* * *
From Neurons to Nirvana:
* * *
The language of young love: The ways couples talk can predict relationship success:
* * *
Eliezer Yudkowsky on true AI: humanity's most consequential invention:
* * *
Bristol team pioneers depression surgery technique:
* * *
Virtual self can affect reality self:
* * *
("'The Hidden Reality: Parallel Universes and the Deep Laws of the Cosmos' by Brian Greene")
* * *
("Lives of the Philosophers, Warts and All")
* * *
"Default menu: Steak, Lamb Burgers, Bacon..."
Less wrong or catastrophically mistaken?
("Newtonmas Meetup, 12/25/2010 - Less Wrong")
* * *
Can we build emotional superintelligence?
("Emotional intelligence peaks as we enter our 60s, research suggests")
* * *
("George Clooney Effect? High-earning women want older, more attractive partners, research finds")
"Boredom Enthusiasts Discover the Pleasures of Understimulation
Envoy of Ennui Calls a Meeting; An Energy Bar for Everybody)"
* * *
("MoNETA: A Mind Made from Memristors
* * *
("Tories may be born not made, claims a study that suggests people with right
wing views have a larger area of the brain associated with fear.")
* * *
("Far-Flung Movies May Inspire Future Scientists")
* * *
("Docs Detail CIA’s Cold War Hypnosis Push")
* * *
("Blondes have more funds: How reaching for the bleach could see you earn £600 more than brunette colleagues")
* * *
("Eliminating dopamine turns fruit flies into masochists")
* * *
("Social whirl of a life? Thank your amygdala
Researchers find almond-shaped clump of nerves in brain is larger in more gregarious people")
* * *
("Storing Lungs For Future Transplants
New technology is becoming quickly available to store donor lungs and keep them viable until transplanted into a patient.")
* * *
("Guardian angels 'protect third of Britons')
* * *
("Futurology: The tricky art of knowing what will happen next")
* * *
("Boosting supply of key brain chemical [acetylcholine] reduces fatigue in mice")
* * *
("What makes a face look alive? Study says it's in the eyes")
("Gerbils also get the winter blues")
Bite Me: An evolutionary case for cannibalism.
* * *
("Why Everyone (Else) Is a Hypocrite: Evolution and the Modular Mind"
Robert Kurzban)
* * *
("Marilu Henner's Super-Memory Summit: Actress Marilu Henner is becoming known for more than just 'Taxi.' She's one of the handful of people who scientists say can remember their entire lives.")
* * *
("Pretty Women Make Simple Men
Men become simpler in the presence of a beautiful woman.")
* * *
The first European gathering of the Triple Nine Society (egg999).
* * *
("The fearless woman who's lucky to be alive")
* * *
("MDMA: Empathogen or love potion")
* * *
("A Bayesian Take on Julian Assange")
* * *
("The key to being attractive (and looking healthy)? A good night's sleep")
* * *
("Why are books on ethics so likely to be stolen?
A surprising study shows that classic (pre-1900) ethics books are twice as likely to go missing as other philosophy books")
* * *
("Werner Herzog on Nature...")
* * *
("Old and Wise: Why Do Smarter People Live Longer?
("Bees help to explain the link between intelligence and long life")
* * *
("Scientist shows link between diet and onset of mental illness")
* * *
("Everyone thinks everyone else has less free will")
* * *
("Happiness doesn't increase with growing wealth of nations, finds study")
* * *
("Curiosity's Evil Twin Can Drive You Insane")
* * *
("Database on how 'bees see world'")
* * *
("Skin was the first organ to evolve")
* * *
("Muslims Save Jews in Untold WWll Story
Exhibit showcases photographs of Albanian Muslims who sheltered Jews during the Holocaust ")
* * *
("Oh No You Didn't: Emotional Regulation And The Online Community")
* * *
("Face memory peaks late, after age 30")
* * *
("Face mask that's so good every crook wants one")
* * *
("The Top Ten Daily Consequences of Having Evolved
* * *
("Look: What your reaction to someone's eye movements says about your politics")
* * *
("Sexual selection: Hunkier than thou
* * *
("Irritable Male Syndrome")
* * *
("People with 'warrior gene' better at risky decisions")
* * *
("Wealth and ambition
("Rats living in fancier digs seek richer rewards")
* * *
("Giant storks may have fed on real-life hobbits
"Bones of small humans and giant birds, found together, tell chilling tale")
* * *
("Autism Breakthrough? D-Cycloserine Treatment For Impaired Sociability")
* * *
("What Zen meditators don't think about won't hurt them")
* * *
("McGill Researchers Prolong Worms' Life With Banned Herbicide")
* * *
("Food: A taste of things to come?
"Researchers are sure that they can put lab-grown meat on the menu — if they can just get cultured muscle cells to bulk up.")
* * *
("Blueberries and other purple fruits to ward off Alzheimer's, Multiple Sclerosis and Parkinson's")
* * *
("Unlocking the secrets of our compulsions")
* * *
("The teenager who sleeps for 10 days")
* * *
("Feeling chills in response to music")
* * *
("Viagra and porn used to tempt pandas to breed
A conservation project in China has produced 136 panda cubs, with the help of some imaginative but controversial techniques.")
* * *
("Flaming drives online social networks")
* * *
("Baby Names Reveal More About Parents Than Ever Before")
* * *
("Study reveals 'secret ingredient' in religion that makes people happier")
* * *
("Test Your Insight: Scientists have found indications that your ability to jump to intuitive answers — what they term the “Aha!” moment — may be affected by your mood")
* * *
What will this unspeakable woman do next? ("Sarah Palin shoots caribou... after missing five times: Sarah Palin was shown shooting a caribou on the latest episode of her reality television show.")
* * *
("David Nutt: 'The government cannot think logically about drugs'")
* * *
("Low-status leaders are ignored, researchers find; How a leader is picked impacts whether others will follow")
* * *
("Do you really want to live forever?")
* * *
("WikiLeaks Secrets: Is Gossip Good?")
* * *
("Flaws in a long-accepted test used to search for signs of self-awareness are revealing that selfhood varies culturally and exists on a continuum")
* * *
("Who Is Happy and When? by Thomas Nagel")
* * *
("Fear of being envied makes people behave well toward others")
* * *
("The Schtory Of Schmeat: Vladimir Mironov's Lab-Grown Chicken")
* * *
("Finger length predicts mental toughness in sport")
* * *
("Singularity Science Theater 3000: How Reverse-Engineering Postponed Artificial Intelligence")
* * *
("Antonio Damasio explores consciousness in Self Comes to Mind. How Weird Is Consciousness?)
* * *
("Fountain of youth in your muscles? Researchers uncover muscle-stem cell mechanism in aging")
* * *
("Like to Sleep Around? Blame Your Genes")
* * *
("How to Save the World")
* * *
("Disgusting Pictures Can Make You Look Good")
* * *
("Should you strike a powerful pose?")
* * *
("The primitive social network: bullying required")
* * *
("Factory Farming: Why I Choose to be a Vegan: Must Watch This and Share!")
* * *
("Paradoxical Truth")
* * *
("Does living in the city age your brain")
* * *
("Debate: Does the Universe have a purpose?")
* * *
("Don't treat those in long-term pain as junkies")
* * * ("Artificial intelligence: No command, and control
Chaos fills battlefields and disaster zones. Artificial intelligence may be better than the natural sort at coping with it")
* * *
("Olga Kotelko, the 91-Year-Old Track Star
The Incredible Flying Nonagenarian")
* * *
("Female fish -- and humans? -- lose interest when their male loses a slugfest")
* * *
("People Behave Badly When It's Easy")
* * *
("Thoughts of religion prompt acts of punishment")
* * *
("A tilt of the head can lure a mate")
* * *
("For macaques, male bonding is a political move")
* * *
("The science of decisions")
* * *
III Colloquium on Ethics and Applied Ethics: "Evolution and Transhumanism"
November 17, 2010 at 8:30am
Federal University of Santa Maria (CCSH) Brazil, RS
[on future suffering]
"Against Wishful Thinking" by Brian Tomasik:
I fear Reality may be worse than Brian imagines. It's probably unwise to say such things, but if the multiverse had an "OFF" button, then I'd press it - despite my tentative belief that we're destined to phase out the biology of suffering in our forward light-cone and enjoy life animated by gradients of intelligent bliss orders of magnitude richer than anything physiologically accessible today.
So why support initiatives to reduce existential and global catastrophic risk? Such advocacy might seem especially paradoxical if you're inclined to believe (as I am) that Hubble volumes where primordial information-bearing self-replicators arise more than once are vanishingly rare - and therefore cosmic rescue missions may be infeasible. Suffering sentience may exist in terrible abundance beyond our cosmological horizon and in googols of other Everett branches. But on current understanding, it's hard to see how rational agency can do anything about it.
1) Politics, they say, is the art of the possible. Advocates of voluntary human extinction have zero political prospects. David Benatar's plea for human extinction via voluntary childlessness falls victim to the argument from selection pressure. Technically, we could probably sterilise the planet with e.g. cobalt-salted multi-gigaton thermonuclear Doomsday devices. Such devices are not going to be built. Further, the creation of self-sustaining bases on the Moon and Mars later this century means in any case such mega-weapons wouldn't eradicate life in the solar system. The window of opportunity - or alternatively window of risk - of human extinction is small: perhaps only a few decades.
More realistically, I think anyone who cares about suffering should instead promote a messy, complicated, and piecemeal approach centred on biotechnology, in vitro meat development, and later high-tech Jainism - and of course the hundreds of health, education and welfare initiatives practised locally around the planet today. Life based on gradients of well-being is potentially saleable; world-destruction isn't.
2) If we go to the trouble of phasing out the biology of suffering on Earth, how likely are we to recreate the miseries of our past in terraforming solar systems in the rest of the Galaxy and beyond? Exceedingly naive as this sounds, isn't there anything akin to ethical progress? How likely is the creation of suffering in an era of radical transparency and ubiquitous neuroscanning? Creating suffering may come to seem as irrational and absurd as, say, two mirror-touch synaesthetes having a fist-fight. I understand Brian's point is that the exponential increase in computational power means that someone, somewhere is likely to proliferate digital hell-worlds. Here Brian and I disagree over the prospects of digital sentience - and whether unitary conscious minds are essentially classical or quantum phenomena. As I discuss elsewhere, possibly the greatest cognitive achievement of organic minds over the past few hundred million years has been to solve the binding problem - and run data-driven, cross-modally matched egocentric world-simulations of the local environment in almost real time. I know the assumption that a classical digital computer can be conscious - and support "brain-emulations" that are conscious - is quite widely shared in the AI community. However, there is no empirical evidence to support this conjecture.
3) We simply do not understand Reality - and therefore we do not understand the upper bounds on rational agency. Posthuman superintelligence will presumably be better cognitively equipped than humans to take the decisions needed for responsible stewardship of our Hubble volume.
Brian, one argument that some futurist critics make against promoting superhappiness is that the outcome will be the opposite of what you most fear. By seeking too much bliss too greedily now, runs this argument, we'll get trapped in suboptimal local maximum here on Earth - a blissful but stagnant Brave New World, so to speak, or maybe even the functional equivalent of wireheading. For it's much easier to engineer raw bliss than ultraintelligent, pro-social information-sensitive gradients of superhappiness.
Technically, at any rate, I agree with you in one sense: the adaptive radiation of intelligence across our local supercluster will in theory leave scope for creating immense suffering elsewhere - a capacity that, if exercised, presumably dwarfs the sufferings of naturally evolved Darwinian life if such life really does exist elsewhere in our Hubble volume. I don't think this argument holds for the abolitionist project narrowly conceived, i.e. simply phasing out the biology of suffering. But if your worries about the propagation of suffering are well founded and the critics are correct, shouldn't you be arguing in favour of aggressive near-future happiness maximisation?
[Thanks for the kind words. Sad to say, I tend to find my virtue is a function of whether I think anyone else is watching. Darwinian humans are frail creatures. Roll on posthuman paradise...]
Jeff, thanks for clarifying your position. First, I agree with you about the implicit conceptual dualism of materialism / orthodox physicalism. Perhaps radical eliminativists about consciousness like Dennett escape the charge of dualism; but I find eliminativism incredible, literally, although I know of only one definitive counterexample. On the face of it, Strawsonian physicalism is a dualist story too. After all, there are many sorts of entity in the natural world that are not subjects of experience, for example a rock, a galaxy, a brain in a dreamless sleep, the population of China, and (I'd argue) a classical digital computer. However, reductive physicalism imposes extremely tight constraints on the ultimate furniture of the world. There is no "element of reality" that isn't captured in the formalism of physics - the master equation of the Theory Of Everything beyond the Standard Model and its solutions. All the higher-level objects in our conceptual scheme must ultimately be cashed out in terms of fundamental physics. In philosophy-speak, I argue for "mereological nihilism".
Mereological nihilism is hard to reconcile with the existence of bound phenomenal objects and the fleeting synchronic unity of the self. Unless we're in a dreamless sleep, the fact we're not just fields of "mind dust" is what makes the binding problem so challenging, at least if we naively assume that the mind-brain is essentially a classical information processor.
One more point. Berkeleyan idealists and post-Kantian idealists did indeed argue that the world is made up of ideas. And I'd certainly argue that what each of us apprehends as the mind-independent world is only a toy simulation that the mind-brain is running of the mind-independent world - or at least of some or other quasi-classical Everett branch of the mind-independent world. But Berkeleyan or post-Kantian idealism are distinct from the pan-experientialism of Strawsonian physicalism. On the Strawsonian physicalist account, if fields of microqualia are the stuff of the world, then the nature of these micro-experiences must presumably be unimaginably more primitive than a mental idea or a perceptual object in our minds. Compare how stimulating the nerve cell of an awake subject with microelectrodes may trigger, say, a brief speckle of colour somewhere in one's visual field. If Strawsonian physicalism is true, then the ultimate experiential simples of the world's fundamental fields must be smaller, simpler and fainter than this fleeting speckle by orders of magnitude. Such a gulf is one reason why many scientifically literate people find panpsychism - even dressed up in the fancy language of Strawsonian physicalism - so implausible, though of course panpsychism has a venerable history in philosophy.
* * *
Jeff, materialism and idealism are radically different ontologies. Materialism says the world is made up of non-sentient "stuff". Idealism says the world is made up of experience ("qualia"). Physicalism is generally reckoned a close cousin of materialism: the behaviour of the stuff of the world is exhaustively described by the equations of physics. Idealism, by contrast, is normally associated with Bishop Berkeley or post-Kantian German philosophy. Unfortunately, if materialism / orthodox physicalism were true, we'd be zombies. Hence the Hard Problem of consciousness and Levine's Explanatory Gap. A Strawsonian physicalist, on the other hand, takes seriously the fact that physics is silent on the intrinsic nature of the stuff of the world - the "fire" in the equations. Fields (superstrings / branes) in fundamental physics are defined purely mathematically. Would a world made up of fields of microqualia whose behaviour is exhaustively described by the equations of physics be empirically any different from our world? If so, how?
I think Strawsonian physicalism is a precondition for any explanation of how subjects of experience are possible. But we still need to show how organic minds solve the binding problem.
Here is Strawson's original paper: "Realistic monism: why physicalism entails panpsychism":
David Chalmers considers Strawsonian physicalism (what he calls Russellian monism) and the binding problem (what William James called the combination problem)
before opting for a naturalistic dualism. In my view, dualism is a counsel of despair.
Thanks Brian. Sad to say, cats are one reason I'm a negative utilitarian.
("Mice Versus Cats: The Verisimilitude of Art Spiegelman's 'Maus: A Survivor's Tale' ")
Classical and positive utilitarians alike want to optimise the world into some kind of cosmic orgasm. Negative utilitarians can settle for gradients of intelligent bliss.
Jonathan, yes, I think a strong pragmatic case can be made for convergence. Life animated by gradients of intelligent bliss may not be ideal; but it's still a recognisable approximation of Heaven. Alas intellectually we may still find it troubling that both NU and CU are, on the face of it, ethically committed to destroy the world if the need or the opportunity arises - the NU to avoid a mere pinprick, the CU to convert a rich posthuman civilisation into utilitronium. Of course, we have no grounds for supposing our ethical intuitions are any more reliable than folk physics - and there may be strong indirect utilitarian arguments for NUs and CUs alike to shut up about world-destruction. But presumably we want an ethic to be proud of...
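The contrast can be written out schematically - a minimal formalisation, where the notation (w for a possible world, u_i(w) for the signed hedonic level of subject i) is just illustrative shorthand, not anything canonical:

\[
\text{CU: } \max_w \sum_i u_i(w), \qquad
\text{NU: } \min_w \sum_i \max\bigl(0,\, -u_i(w)\bigr)
\]

An empty world scores zero on both counts. So for NU it weakly dominates any world containing suffering, however slight; while CU endorses converting any world, however rich, into whatever configuration maximises the total - hence utilitronium. A world based entirely on gradients of well-being, with every u_i(w) > 0, already attains the NU optimum of zero suffering - which is why NU can "settle" for intelligent bliss.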
* * *
[on the kappa opioid receptor]
In Boston for the Kappa Therapeutics Conference.
Kappa is perhaps the world's nastiest, most evil receptor.
The Kappa Connection
If anyone fancies meeting up for an early breakfast neurobabble, I'm here in Boston until 28th. I assumed I could pass incognito, but I was handed a name tag to wear by the conference organisers: "Hedonyx Unlimited." Ouch. Very droll...
Here with fellow vegan, JDTic researcher and bitcoin entrepreneur James Evans. My body keeps English time, so early is great. Breakfast at the hotel starts at 7.00 a.m. I'll be up drinking coffee long before. I still don't understand JDTic, the world's first orally active selective kappa opioid antagonist. Also, I've never been brave enough to try Salvia divinorum - a kappa agonist with ultra-weird dissociative effects that often induces dysphoria. Salvia is a partial dopamine agonist too, which explains why not everyone gets freaked out by taking it. In the long run, I suspect the world will be better off without any CNS kappa receptors at all.
Oh, Andre, I agree, the kappa receptor has a purpose, i.e. to modulate all sorts of unpleasant experiences that help our genes leave more copies of themselves. Fortunately, the nature of selection pressure will change as the reproductive revolution of designer babies gathers pace...
"Fast-acting antidepressant effect": good. "Getting high": bad. Difference in hedonic tone: indistinguishable. I am biting my tongue here...
Indeed so Andres. Engineered kappa knockout humans might display signs and symptoms similar to kappa knockout mice. I've just learned more about why JDTic isn't being further developed: at 1mg in a clinical trial, post-dose nonsymptomatic and nonsustained ventricular tachycardia. Frustrating. We're focused on selective kappa antagonists, but for the action of a kappa agonist, see
I guess a true scientist would seek to explore heaven and hell alike. Not me:
If I had to hazard a guess, the molecular keys to Heaven and Hell lie ultimately not at the synapse but internal to the nerve cell. We interrogate the synapse because it's easiest to investigate...
More on the kappa connection:
("Brain's 'dark side' as key to cocaine addiction")
But where are the wonderdrugs?
* * *
[on transhumanist marriage]
"The highest happiness on earth is marriage."
(William Lyon Phelps)
Holy wedlock! Dave is in Montreal for Nick Bostrom's wedding. Will transhumanists spearhead a return to traditional family values?
* * *
Both marriage and emasculation statistically confer longevity benefits for males, though there are pitfalls to consider too.
("The secret to a longer life? A puppy, a happy marriage and plenty of good friends")
Nick wants to go skiing. Some folk have no conception of existential risk.
* * *
Eray, in principle yes, but this was a gathering of northern prairie voles, not southern swingers.
("I get a kick out of you")
* * *
(Iris Murdoch).
One day so, perhaps: I'd just be happier if relationships were a branch of applied engineering rather than a leap into the unknown...
* * *
How to consecrate future marriages?
("My chemical romance: can medicine cure divorce? Could a new love drug help us beat the divorce statistics?")
In a world without emotions, would anything matter at all? Hedonic tone is what gives significance to our lives, whether one happens to be a roundworm or an Oxford professor.
* * *
Should genetic casinos be mandatory - or optional?
Perhaps playing Russian roulette should be legal for adults(?). But not when a child's life is at stake.
Indeed, Alexander. And by posthuman lights, we're all toddlers. Toddlers need their interests protecting - not their non-existent metaphysical freedom. I suspect superintelligence would not euthanase humans, but "uplift" us - which depending on one's theory of personal identity, amounts to the same...
“Humanity is the sin of God”, said Theodore Parker; but perhaps we might regard Jesus as a proto-transhumanist. Thanks Alexander, I shall investigate...
* * *
Thanks Jeffrey! Yes, small steps on the road to global veganism. Eventually, we're going to realise that the urge to eat each other is a dangerous psychosis. I think our goal should be to combine utopian technology with utopian ethics:
Hence the transhumanist commitment to the well-being of sentience - hopefully without the need to sweep the ground before one's feet before walking. Alas adopting, e.g. the Noble Eightfold Path is not going to recalibrate the hedonic treadmill or dismantle the horrors of the food chain:
But Buddhists are correct, I think, to claim that overcoming suffering should be our primary goal. Everything else is the icing on the cake. With the possible exception of surprise, I think happiness and love are the only Darwinian emotions worth conserving. Alas the emotional palette of superhappy posthumans is beyond human imagination.
* * *
Good knockabout stuff:
Alas our friendly critic does not understand hedonic recalibration!
* * *
An organic Singularity?
* * *
Might we phase out the biology of suffering and recreate it in the guise of an ancestor simulation?
("Abolition and the Simulation Argument")
If the Simulation Hypothesis is correct, then our Simulator would seem satanic rather than infinitely good.
Actually, as you know, I think we're living in god-forsaken basement reality. True Alexander, "Satanism" is commonly misunderstood. Secular transhumanists - and certainly abolitionist transhumanists - sometimes borrow the language of "Heaven" and "Paradise" to evoke our glorious posthuman future. By contrast, to rely on the lexicon of Satanism would pose an almighty challenge even to a world-class corporate branding strategist.
* * *
Nick's colleague at the FHI, hyperthymic transhumanist Anders Sandberg ("I do have a ridiculously high hedonic set point")
("Anders Sandberg Enhancement Talk at the Oxford Positive Philosophy Seminar, Q&A")
I hope Anders will have his genome sequenced and become a professional sperm donor. The person asking the initial question in the video above is Toby, another FHI regular with a pretty high hedonic set-point too I believe.
Anders is always bubbly. It's not technically harder to design an organic robot animated by gradients of bliss rather than gradients of discontent. But such people are rarer - which gives us clues to the nature of life in the "ancestral environment of adaptedness" on the African savannah.
* * *
("Re: Negative-leaning utilitarianism as classical utilitarianism")
Although (depending on one's solution to the binding problem) it's quite possible that individual mental "frames" physically subsist only for subpicosecond intervals, each here-and-now has a much greater phenomenal spatiotemporal depth. There are examples of individual mental states so terrible that one would erase the world to end them. So though one can say "Yes" to undergoing them beforehand, the outcome would always be "No".
So stepping back, what is the optimally de-biased hedonic state to evaluate the merits of negative, classical and positive utilitarianism? Should ideal respondents be above or below hedonic zero - or at it? The question of bias may not seem directly relevant; but it's notable that the answers we give are more than usually obliquely autobiographical.
Whatever the right answer, I hope the prospect of phasing out the biology of suffering doesn't get tied by association to signing up to an ethic of negative utilitarianism - or indeed utilitarianism of any kind. [This is one reason I soft-pedal my own negative utilitarianism.] Life based globally on gradients of superAnders-like well-being - and superAnders-like intelligence - would be unrecognisably richer than the status quo. But hedonic recalibration doesn't involve your giving up anything you value - unless your core values entail the preservation of misery.
* * *
Smart Drugs?
("Episode 9: Philosopher David Pearce Talks Transhumanism - Smart Drug Smarts")
I was talking via Skype to Ho Chi Minh City / Saigon; but hopefully there aren't too many technical glitches.
2 x 250mg resveratrol daily, Sebastian. But I'm unclear what the optimal dose is. Also, resveratrol has (weak) MAO-inhibiting properties, the mild mood-elevating effects of which may contribute to its popularity as a supplement.
* * *
Countdown to eternal youth? Not quite yet... ("Anti-aging drug breakthrough")
Yes, that will take some serious pill-popping. My guess is that Jeanne Calment's record is safe until the 2030s; but you wouldn't guess so from reading the Daily Mail.
("New drug being developed using compound found in red wine 'could help humans live until they are 150'")
* * *
Digital nirvana or resurrection of the flesh?
("Academics at Oxford University pay to be cryogenically preserved so they can be 'brought back to life in the future'")
"It is a glorious thing to be indifferent to suffering, but only to one's own suffering."
(Robert Lynd)
H+ Philosophers
by Hank Pellissier, Ethical Technology
* * *
(Professor Daniel Gilbert,
Department of Psychology, Harvard University)
A Brazilian-Portuguese translation by Gabriel Garmendia of "An Information-Theoretic Perspective on Heaven":
A vida distante do norte: Uma perspectiva teórico-informativa sobre o paraíso
and the grim Suffering in the Multiverse
Ética Quântica? Sofrimento no Multiverso
Cinco Razões Pelas Quais o Transhumanismo é Capaz de Eliminar o Sofrimento
Instituto Humanitas Unisinos, Brazil, Janeiro de 2011
Entrevista com David Pearce
* * *
A Welfare State for Elephants
The cost? Perhaps between two and three billion dollars.
* * *
TEDxDelMar: Envisioning Transhumanity
San Diego Transhumanists
* * *
An unexpected surprise from an enthusiastic Greek abolitionist:
Το Πρόταγμα της Κατάργησης του Πόνου
* * *
What Is Empathetic Superintelligence?
* * *
When will be the world's last unpleasant experience in our forward light-cone?
Stanford Transhumanist Association: The Abolitionist Project
* * *
Try everything once?
Some experiences are best saved until one's deathbed...
Qual é a melhor hora para consumir crack/cocaína?
* * *
A worthy tract on moral philosophy? Not exactly...
Interview in Leisure Only with David Pearce.
* * *
But when?
Five Reasons Transhumanism Can Abolish Suffering
* * *
A suitably Swedish consensus beckons...(?)
Open Questions for Transhumanism
* * *
(Karl Popper)
Can we really phase out the biology of suffering?
Transhumanism 2011
Interviewer Aron Vallinder, Människa Plus.
* * *
AR Zone Podcast
Should our interventions in Nature be based on an ideology of conservation biology or compassion?
* * *
The admirable Pablo Stafforini has comprehensively updated the Spanish translation of The Abolitionist Project.
El Proyecto Abolicionista
* * *
Technological Singularities, Intelligence Explosions & The Future of Biological Sentience
Extended abstract of invited contribution to "The Singularity Hypothesis" (Springer, 2012, forthcoming)
* * *
Människa Plus seminar:
Open Questions for Transhumanism
* * *
The Biointelligence Explosion (preprint)
Humans and Intelligent Machines
Co-Evolution, Fusion or Replacement?
* * *
The Institute for Ethics and Emerging Technologies (IEET) are running in four parts this plea for a discipline of compassionate biology to replace conservation biology.
The Problem of Predation
My personal sympathies lie closer to a (less provocatively expressed) version of Robert Wiblin's
Why improve nature when destroying it is so much easier?
rather than the costly, complicated and technically challenging project described in Reprogramming Predators. But either way, it's impossible to reconcile maintaining the biological status quo with a compassionate ethic of harm-reduction.
* * *
[on the webmaster's Reddit AMA]
I am now doing an AMA on Reddit. A poster asks me to add a comment here to prove my identity - a challenge at the best of times...
David Pearce Reddit AMA
The Garden of Eden
David Pearce (2014)
FB 2017
FB 2016
FB 2015
FB 2014
Talks 2015
Quora Answers
LessWrong 2013
Some Interviews
The Abolitionist Project
Social Network Postings (2017)
Can Science Abolish Suffering? (2013)
Hedonistic Imperative Facebook Group Posts
Rectangular potential barrier
From Wikipedia, the free encyclopedia
Although a particle hypothetically behaving as a point mass would be reflected, a particle actually behaving as a matter wave has a finite probability that it will penetrate the barrier and continue its travel as a wave on the other side. In classical wave-physics, this effect is known as evanescent wave coupling. The likelihood that the particle will pass through the barrier is given by the transmission coefficient, whereas the likelihood that it is reflected is given by the reflection coefficient. Schrödinger's wave-equation allows these coefficients to be calculated.
Scattering at a finite potential barrier of height $V_0$. The amplitudes and directions of left- and right-moving waves are indicated. In red, the waves used for the derivation of the reflection and transmission amplitudes. $E > V_0$ for this illustration.
The time-independent Schrödinger equation for the wave function $\psi(x)$ reads
$$\hat H \psi(x) = \left[ -\frac{\hbar^2}{2m} \frac{d^2}{dx^2} + V(x) \right] \psi(x) = E \psi(x),$$
where $\hat H$ is the Hamiltonian, $\hbar$ is the (reduced) Planck constant, $m$ is the mass, $E$ the energy of the particle and
$$V(x) = V_0 \left[ \Theta(x) - \Theta(x - a) \right]$$
is the barrier potential with height $V_0 > 0$ and width $a$. Here
$$\Theta(x) = \begin{cases} 0 & x < 0 \\ 1 & x > 0 \end{cases}$$
is the Heaviside step function.
The barrier is positioned between $x = 0$ and $x = a$. The barrier can be shifted to any position without changing the results. The first term in the Hamiltonian, $-\frac{\hbar^2}{2m} \frac{d^2}{dx^2}$, is the kinetic energy.
The barrier divides the space in three parts ($x < 0$, $0 < x < a$, $x > a$). In any of these parts, the potential is constant, meaning that the particle is quasi-free, and the solution of the Schrödinger equation can be written as a superposition of left and right moving waves (see free particle). For $E \neq V_0$,
$$\psi(x) = \begin{cases} A_r e^{i k_0 x} + A_l e^{-i k_0 x} & x < 0 \\ B_r e^{i k_1 x} + B_l e^{-i k_1 x} & 0 < x < a \\ C_r e^{i k_0 x} + C_l e^{-i k_0 x} & x > a \end{cases}$$
where the wave numbers are related to the energy via
$$k_0 = \sqrt{2 m E/\hbar^2}, \qquad k_1 = \sqrt{2 m (E - V_0)/\hbar^2}.$$
The index r/l on the coefficients A and B denotes the direction of the velocity vector. Note that, if the energy of the particle is below the barrier height, $k_1$ becomes imaginary and the wave function is exponentially decaying within the barrier. Nevertheless, we keep the notation r/l even though the waves are not propagating anymore in this case. Here we assumed $E \neq V_0$. The case $E = V_0$ is treated below.
The coefficients have to be found from the boundary conditions of the wave function at $x = 0$ and $x = a$. The wave function and its derivative have to be continuous everywhere, so
$$\psi(0^-) = \psi(0^+), \qquad \frac{d\psi}{dx}(0^-) = \frac{d\psi}{dx}(0^+),$$
$$\psi(a^-) = \psi(a^+), \qquad \frac{d\psi}{dx}(a^-) = \frac{d\psi}{dx}(a^+).$$
Inserting the wave functions, the boundary conditions give the following restrictions on the coefficients:
$$A_r + A_l = B_r + B_l,$$
$$i k_0 (A_r - A_l) = i k_1 (B_r - B_l),$$
$$B_r e^{i k_1 a} + B_l e^{-i k_1 a} = C_r e^{i k_0 a} + C_l e^{-i k_0 a},$$
$$i k_1 \left( B_r e^{i k_1 a} - B_l e^{-i k_1 a} \right) = i k_0 \left( C_r e^{i k_0 a} - C_l e^{-i k_0 a} \right).$$
E = V0
If the energy equals the barrier height, the solutions of the Schrödinger equation in the barrier region are not exponentials anymore but linear functions of the space coordinate:
$$\psi(x) = B_1 + B_2 x \qquad (0 < x < a).$$
The complete solution of the Schrödinger equation is found in the same way as above by matching wave functions and their derivatives at $x = 0$ and $x = a$. That results in the following restrictions on the coefficients:
$$A_r + A_l = B_1,$$
$$i k_0 (A_r - A_l) = B_2,$$
$$B_1 + B_2 a = C_r e^{i k_0 a} + C_l e^{-i k_0 a},$$
$$B_2 = i k_0 \left( C_r e^{i k_0 a} - C_l e^{-i k_0 a} \right).$$
Transmission and reflection
At this point, it is instructive to compare the situation to the classical case. In both cases, the particle behaves as a free particle outside of the barrier region. A classical particle with energy $E$ larger than the barrier height $V_0$ would always pass the barrier, and a classical particle with $E < V_0$ incident on the barrier would always get reflected.
To study the quantum case, consider the following situation: a particle incident on the barrier from the left side ($A_r$). It may be reflected ($A_l$) or transmitted ($C_r$).
To find the amplitudes for reflection and transmission for incidence from the left, we put in the above equations $A_r = 1$ (incoming particle), $A_l = r$ (reflection), $C_l = 0$ (no incoming particle from the right), and $C_r = t$ (transmission). We then eliminate the coefficients $B_l$, $B_r$ from the equations and solve for $r$ and $t$.
The result is:
$$t = \frac{e^{-i k_0 a}}{\cos(k_1 a) - i\,\frac{k_0^2 + k_1^2}{2 k_0 k_1}\,\sin(k_1 a)},$$
$$r = \frac{-i\,\frac{k_0^2 - k_1^2}{2 k_0 k_1}\,\sin(k_1 a)}{\cos(k_1 a) - i\,\frac{k_0^2 + k_1^2}{2 k_0 k_1}\,\sin(k_1 a)}.$$
Due to the mirror symmetry of the model, the amplitudes for incidence from the right are the same as those from the left. Note that these expressions hold for any energy $E > 0$.
Analysis of the obtained expressions
E < V0
Transmission probability of a finite potential barrier as a function of energy. Dashed: classical result. Solid line: quantum mechanics.
The surprising result is that for energies less than the barrier height, $E < V_0$, there is a non-zero probability
$$T = \frac{1}{1 + \dfrac{V_0^2 \sinh^2(\kappa_1 a)}{4 E (V_0 - E)}}$$
for the particle to be transmitted through the barrier, with $\kappa_1 = \sqrt{2 m (V_0 - E)/\hbar^2}$. This effect, which differs from the classical case, is called quantum tunneling. The transmission is exponentially suppressed with the barrier width, which can be understood from the functional form of the wave function: Outside of the barrier it oscillates with wave vector $k_0$, whereas within the barrier it is exponentially damped over a distance $1/\kappa_1$. If the barrier is much larger than this decay length, the left and right part are virtually independent and tunneling as a consequence is suppressed.
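As a sanity check of the tunneling formula, here is a minimal numerical sketch (an addition, not part of the original article; the electron mass and the 1 eV, nanometre-scale barrier are illustrative assumptions) showing the exponential suppression of $T$ with the barrier width:

```python
import numpy as np

hbar = 1.0545718e-34   # reduced Planck constant (J s)
m = 9.1093837e-31      # electron mass (kg)
eV = 1.602176634e-19   # one electronvolt in joules

def transmission(E, V0, a):
    """T for E < V0: T = 1 / (1 + V0^2 sinh^2(kappa a) / (4 E (V0 - E)))."""
    kappa = np.sqrt(2 * m * (V0 - E)) / hbar   # decay constant inside the barrier
    return 1.0 / (1.0 + V0**2 * np.sinh(kappa * a)**2 / (4 * E * (V0 - E)))

V0 = 1.0 * eV   # barrier height (illustrative)
E = 0.5 * eV    # particle energy below the barrier
for a_nm in [0.1, 0.5, 1.0, 2.0]:
    T = transmission(E, V0, a_nm * 1e-9)
    print(f"a = {a_nm} nm  ->  T = {T:.3e}")   # T drops roughly as exp(-2 kappa a)
```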
E > V0
In this case
$$T = \frac{1}{1 + \dfrac{V_0^2 \sin^2(k_1 a)}{4 E (E - V_0)}},$$
with $k_1 = \sqrt{2 m (E - V_0)/\hbar^2}$ as above.
Equally surprising is that for energies larger than the barrier height, $E > V_0$, the particle may be reflected from the barrier with a non-zero probability
$$R = 1 - T = \frac{1}{1 + \dfrac{4 E (E - V_0)}{V_0^2 \sin^2(k_1 a)}}.$$
This reflection probability is in fact oscillating with $k_1 a$ and only in the limit $E \gg V_0$ approaches the classical result $R = 0$: no reflection. Note that the probabilities and amplitudes as written are for any energy (above/below) the barrier height.
E = V0
The transmission probability at $E = V_0$ evaluates to
$$T = \frac{1}{1 + \dfrac{m a^2 V_0}{2 \hbar^2}}.$$
Remarks and applications
The calculation presented above may at first seem unrealistic and hardly useful. However, it has proved to be a suitable model for a variety of real-life systems. One such example is the interface between two conducting materials. In the bulk of the materials, the motion of the electrons is quasi-free and can be described by the kinetic term in the above Hamiltonian with an effective mass $m$. Often the surfaces of such materials are covered with oxide layers or are not ideal for other reasons. This thin, non-conducting layer may then be modeled by a barrier potential as above. Electrons may then tunnel from one material to the other, giving rise to a current.
The operation of a scanning tunneling microscope (STM) relies on this tunneling effect. In that case, the barrier is due to the gap between the tip of the STM and the underlying object. Since the tunnel current depends exponentially on the barrier width, this device is extremely sensitive to height variations on the examined sample.
The above model is one-dimensional, while space is three-dimensional. One should solve the Schrödinger equation in three dimensions. On the other hand, many systems only change along one coordinate direction and are translationally invariant along the others; they are separable. The Schrödinger equation may then be reduced to the case considered here by an ansatz for the wave function of the type $\Psi(x, y, z) = \psi(x)\,\phi(y, z)$.
For another, related model of a barrier, see Delta potential barrier (QM), which can be regarded as a special case of the finite potential barrier. All results from this article immediately apply to the delta potential barrier by taking the limits $V_0 \to \infty$, $a \to 0$ while keeping $V_0 a$ constant.
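As a quick numerical check of this limit (again an added sketch; units with $\hbar = 1$, $m = 1/2$ are assumed for brevity, in which the delta barrier $\lambda\,\delta(x)$ has the known transmission $T = 1/(1 + \lambda^2/4E)$):

```python
import numpy as np

# Units with hbar = 1, m = 1/2, so the Schroedinger equation is -psi'' + V psi = E psi.
def T_rect(E, V0, a):
    """Transmission through a rectangular barrier for E < V0 in these units."""
    kappa = np.sqrt(V0 - E)
    return 1.0 / (1.0 + V0**2 * np.sinh(kappa * a)**2 / (4 * E * (V0 - E)))

E, lam = 1.0, 3.0                          # energy and fixed "area" lambda = V0 * a
T_delta = 1.0 / (1.0 + lam**2 / (4 * E))   # delta-barrier transmission
for a in [1.0, 0.1, 0.01, 0.001]:
    print(a, T_rect(E, lam / a, a))        # converges to T_delta as a -> 0
print("delta limit:", T_delta)
```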
References
• Quantum Mechanics. Wiley-Interscience. 1996. pp. 231–233. ISBN 978-0-471-56952-7.
In mathematics and physics, a soliton is a self-reinforcing solitary wave (a wave packet or pulse) that maintains its shape while it travels at constant speed. Solitons are caused by a cancellation of nonlinear and dispersive effects in the medium. "Dispersive effects" refer to dispersion relations between the frequency and the speed of the waves. Solitons arise as the solutions of a widespread class of weakly nonlinear dispersive partial differential equations describing physical systems. The soliton phenomenon was first described by John Scott Russell (1808–1882) who observed a solitary wave in the Union Canal in Scotland. He reproduced the phenomenon in a wave tank and named it the "Wave of Translation".
A single, consensus definition of a soliton is difficult to find. Drazin and Johnson (1989) ascribe 3 properties to solitons:
1. They are of permanent form;
2. They are localised within a region;
3. They can interact with other solitons, and emerge from the collision unchanged, except for a phase shift.
More formal definitions exist, but they require substantial mathematics. Moreover, some scientists use the term soliton for phenomena that do not quite have these three properties (for instance, the 'light bullets' of nonlinear optics are often called solitons despite losing energy during interaction).
Many exactly solvable models have soliton solutions, including the Korteweg–de Vries equation, the nonlinear Schrödinger equation, the coupled nonlinear Schrödinger equation, and the sine-Gordon equation. The soliton solutions are typically obtained by means of the inverse scattering transform and owe their stability to the integrability of the field equations. The mathematical theory of these equations is a broad and very active field of mathematical research.
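As a concrete illustration (an addition, not part of the original text): the Korteweg–de Vries equation $u_t + 6 u u_x + u_{xxx} = 0$ has the one-soliton solution $u(x,t) = \frac{c}{2}\,\mathrm{sech}^2\!\left(\frac{\sqrt{c}}{2}(x - ct)\right)$, a hump of amplitude $c/2$ travelling at speed $c$. The sketch below checks by finite differences that this profile satisfies the equation:

```python
import numpy as np

def soliton(x, t, c=4.0):
    """One-soliton solution of the KdV equation u_t + 6 u u_x + u_xxx = 0."""
    return 0.5 * c / np.cosh(0.5 * np.sqrt(c) * (x - c * t))**2

x = np.linspace(-15.0, 15.0, 4001)
dx = x[1] - x[0]
dt = 1e-6
u = soliton(x, 0.0)
u_t = (soliton(x, dt) - soliton(x, -dt)) / (2 * dt)   # time derivative at t = 0
u_x = np.gradient(u, dx)
u_xxx = np.gradient(np.gradient(u_x, dx), dx)
residual = u_t + 6 * u * u_x + u_xxx
print("max |residual| =", np.abs(residual[5:-5]).max())  # ~0 up to discretization error
```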
Some types of tidal bore, a wave phenomenon of a few rivers including the River Severn, are 'undular': a wavefront followed by a train of solitons. Other solitons occur as the undersea internal waves, initiated by seabed topography, that propagate on the oceanic pycnocline. Atmospheric solitons also exist, such as the Morning Glory Cloud of the Gulf of Carpentaria, where pressure solitons travelling in a temperature inversion layer produce vast linear roll clouds. The recent and not widely accepted soliton model in neuroscience proposes to explain the signal conduction within neurons as pressure solitons.
A topological soliton, or topological defect, is any solution of a set of partial differential equations that is stable against decay to the "trivial solution." Soliton stability is due to topological constraints, rather than integrability of the field equations. The constraints arise almost always because the differential equations must obey a set of boundary conditions, and the boundary has a non-trivial homotopy group, preserved by the differential equations. Thus, the differential equation solutions can be classified into homotopy classes. There is no continuous transformation that will map a solution in one homotopy class to another. The solutions are truly distinct, and maintain their integrity, even in the face of extremely powerful forces. Examples of topological solitons include the screw dislocation in a crystalline lattice, the Dirac string and the magnetic monopole in electromagnetism, the Skyrmion and the Wess-Zumino-Witten model in quantum field theory, and cosmic strings and domain walls in cosmology.
In 1834, John Scott Russell described his wave of translation. The discovery is described here in Russell's own words:
"I was observing the motion of a boat which was rapidly drawn along a narrow channel by a pair of horses, when the boat suddenly stopped - not so the mass of water in the channel which it had put in motion; it accumulated round the prow of the vessel in a state of violent agitation, then suddenly leaving it behind, rolled forward with great velocity, assuming the form of a large solitary elevation, a rounded, smooth and well-defined heap of water, which continued its course along the channel apparently without change of form or diminution of speed. I followed it on horseback, and overtook it still rolling on at a rate of some eight or nine miles an hour, preserving its original figure some thirty feet long and a foot to a foot and a half in height. Its height gradually diminished, and after a chase of one or two miles I lost it in the windings of the channel. Such, in the month of August 1834, was my first chance interview with that singular and beautiful phenomenon which I have called the Wave of Translation."
(Note: This passage has been repeated in many papers and books on soliton theory.)
(Note: "Translation" here means that there is real mass transport such that water can be transported from one end of the canal to the other end by this "Wave of Translation". Usually there is no real mass transport from one side to another side for ordinary waves.)
Russell spent some time making practical and theoretical investigations of these waves. He built wave tanks at his home and noticed some key properties:
• The waves are stable, and can travel over very large distances (normal waves would tend to either flatten out, or steepen and topple over).
• The speed depends on the size of the wave, and its width on the depth of water.
• Unlike normal waves they will never merge; a small wave is overtaken by a large one, rather than the two combining.
• If a wave is too big for the depth of water, it splits into two, one big and one small.
Russell's experimental work seemed at odds with Isaac Newton's and Daniel Bernoulli's theories of hydrodynamics. George Biddell Airy and George Gabriel Stokes had difficulty accepting Russell's experimental observations because they could not be explained by linear water wave theory. His contemporaries spent some time attempting to extend the theory, but it would take until 1895 before Diederik Korteweg and Gustav de Vries provided the theoretical explanation.
(Note: Lord Rayleigh published a paper in Philosophical Magazine in 1876 to support John Scott Russell's experimental observation with his mathematical theory. In his 1876 paper, Lord Rayleigh mentioned Russell's name and also acknowledged that the first theoretical treatment was by Joseph Valentin Boussinesq in 1871. Joseph Boussinesq mentioned Russell's name in his 1871 paper. Thus Russell's observations on solitons were accepted as true by some prominent scientists within his own lifetime (1808–1882). Korteweg and de Vries did not mention John Scott Russell's name at all in their 1895 paper, but they did cite Boussinesq's 1871 paper and Lord Rayleigh's 1876 paper. The paper by Korteweg and de Vries in 1895 was not the first theoretical treatment of this subject, but it was a very important milestone in the history of the development of soliton theory.)
In 1965 Norman Zabusky of Bell Labs and Martin Kruskal of Princeton University first demonstrated soliton behaviour in media subject to the Korteweg–de Vries equation (KdV equation) in a computational investigation using a finite difference approach.
In 1967, Gardner, Greene, Kruskal and Miura discovered an inverse scattering transform enabling analytical solution of the KdV equation. The work of Peter Lax on Lax pairs and the Lax equation has since extended this to solution of many related soliton-generating systems.
Solitons in fiber optics
See also Soliton (optics)
Much experimentation has been done using solitons in fiber optics applications. Solitons' inherent stability makes long-distance transmission possible without the use of repeaters, and could potentially double transmission capacity as well.
In 1973, Akira Hasegawa of AT&T Bell Labs was the first to suggest that solitons could exist in optical fibers, due to a balance between self-phase modulation and anomalous dispersion. Also in 1973 Robin Bullough made the first mathematical report of the existence of optical solitons. He also proposed the idea of a soliton-based transmission system to increase performance of optical telecommunications.
Solitons in a fiber optic system are described by the Manakov equations.
In 1987, P. Emplit, J.P. Hamaide, F. Reynaud, C. Froehly and A. Barthelemy, from the Universities of Brussels and Limoges, made the first experimental observation of the propagation of a dark soliton, in an optical fiber.
In 1988, Linn Mollenauer and his team transmitted soliton pulses over 4,000 kilometers using a phenomenon called the Raman effect, named for the Indian scientist Sir C. V. Raman who first described it in the 1920s, to provide optical gain in the fiber.
In 1991, a Bell Labs research team transmitted solitons error-free at 2.5 gigabits per second over more than 14,000 kilometers, using erbium optical fiber amplifiers (spliced-in segments of optical fiber containing the rare earth element erbium). Pump lasers, coupled to the optical amplifiers, activate the erbium, which energizes the light pulses.
In 1998, Thierry Georges and his team at France Telecom R&D Center, combining optical solitons of different wavelengths (wavelength division multiplexing), demonstrated a data transmission of 1 terabit per second (1,000,000,000,000 units of information per second).
For reasons that depend on the medium, it is possible to observe both positive and negative solitons in optical fibre; for water waves, however, usually only positive (elevation) solitons are observed.
Solitons in magnets
In magnets, there also exist different types of solitons and other nonlinear waves. These magnetic solitons are exact solutions of classical nonlinear differential equations - magnetic equations such as the Landau-Lifshitz equation, the continuum Heisenberg model, the Ishimori equation, the Mikhailov-Yaremchuk equation, the nonlinear Schrödinger equation and so on.
The bound state of two solitons is known as a bion.
In field theory, bion usually refers to the solution of the Born-Infeld model. The name appears to have been coined by G. W. Gibbons in order to distinguish this solution from the conventional soliton, understood as a regular, finite-energy (and usually stable) solution of a differential equation describing some physical system. The word regular means a smooth solution carrying no sources at all. However, the solution of the Born-Infeld model still carries a source in the form of a Dirac-delta function at the origin. As a consequence it displays a singularity at this point (although the electric field is everywhere regular). In some physical contexts (for instance string theory) this feature can be important, which motivated the introduction of a special name for this class of solitons.
On the other hand, when gravity is added (i.e. when considering the coupling of the Born-Infeld model to General Relativity) the corresponding solution is called EBIon, where "E" stands for "Einstein".
References
• N. J. Zabusky and M. D. Kruskal (1965). "Interaction of 'Solitons' in a Collisionless Plasma and the Recurrence of Initial States". Phys. Rev. Lett. 15, 240.
• A. Hasegawa and F. Tappert (1973). "Transmission of stationary nonlinear optical pulses in dispersive dielectric fibers. I. Anomalous dispersion". Appl. Phys. Lett. 23 (3), 142–144.
• P. Emplit, J. P. Hamaide, F. Reynaud, C. Froehly and A. Barthelemy (1987). "Picosecond steps and dark pulses through nonlinear single mode fibers". Opt. Commun. 62, 374.
• P. G. Drazin and R. S. Johnson (1989). Solitons: an introduction. Cambridge University Press.
• N. Manton and P. Sutcliffe (2004). Topological solitons. Cambridge University Press.
• Linn F. Mollenauer and James P. Gordon (2006). Solitons in optical fibers. Elsevier Academic Press.
• R. Rajaraman (1982). Solitons and instantons. North-Holland.
Journal of Mathematics
Volume 2013 (2013), Article ID 520214, 105 pages
Research Article
Two Parameters Deformations of Ninth Peregrine Breather Solution of the NLS Equation and Multi-Rogue Waves
Received 17 November 2012; Accepted 8 February 2013
Academic Editor: S. T. Ali
This paper is a continuation of a recent paper on the solutions of the focusing NLS equation. The representation in terms of a quotient of two determinants gives a very efficient method of determination of the famous Peregrine breathers and their deformations. Here we construct Peregrine breathers of order $N = 9$ and the multi-rogue waves associated by deformation of parameters. The analytical expression corresponding to the ninth-order Peregrine breather is completely given.
1. Introduction
From the fundamental work of Zakharov and Shabat in 1972, who solved the nonlinear Schrödinger equation (NLS) using the inverse scattering method, a lot of studies have been carried out on this equation. Its and Kotlyarov studied the case of periodic and almost periodic algebrogeometric solutions to the focusing NLS equation and constructed these solutions in 1976 [1]. Peregrine constructed the first quasi-rational solutions of the NLS equation in 1983, nowadays known worldwide as Peregrine breathers. In 1985, Akhmediev et al. obtained the two-phase almost periodic solution to the NLS equation and obtained the first higher order analogue of the Peregrine breather [2]. Other families of higher order were constructed in a series of articles by Akhmediev et al. [3, 4] using Darboux transformations.
In 2010, it was shown in [5] that rational solutions of the NLS equation can be written as a quotient of two Wronskians. Recently, in [6] a new representation of the solutions of the NLS equation has been constructed in terms of a ratio of two Wronskian determinants of even order composed of elementary functions; the related solutions of NLS are of order $N$. When we perform the passage to the limit when some parameter tends to $0$, we obtain families of multi-rogue wave solutions of the focusing NLS equation depending on a certain number of parameters. This allows us to recognize the famous Peregrine breather [7] and also higher order Peregrine breathers constructed by Akhmediev et al. [3, 8].
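For orientation (this sketch is an addition, not taken from the paper): in the normalization $iv_t + v_{xx} + 2|v|^2 v = 0$, the order-one Peregrine breather has the well-known closed form $v(x,t) = \left[1 - \frac{4(1 + 4it)}{1 + 4x^2 + 16t^2}\right] e^{2it}$, whose single spike has amplitude $2N + 1 = 3$ for $N = 1$:

```python
import numpy as np

def peregrine(x, t):
    """Order-1 Peregrine breather for i v_t + v_xx + 2|v|^2 v = 0 (unit background)."""
    return (1 - 4 * (1 + 4j * t) / (1 + 4 * x**2 + 16 * t**2)) * np.exp(2j * t)

print("peak |v| at the origin:", abs(peregrine(0.0, 0.0)))    # 3.0 = 2N + 1 for N = 1
print("background |v| far away:", abs(peregrine(50.0, 0.0)))  # -> 1.0
```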
Recently, another representation of the solutions of the focusing NLS equation, as a ratio of two determinants, has been given in [9] using a generalized Darboux transform.
A new approach was developed in [10], which gives a determinant representation of solutions of the focusing NLS equation, obtained from the Hirota bilinear method, derived by reduction of the Gram determinant representation for the Davey-Stewartson system.
Here, we construct the breather of order $N = 9$, which shows the efficiency of this method.
2. Expression of Solutions of NLS Equation in terms of Wronskian Determinant and Quasi-Rational Limit
2.1. Solutions of the NLS Equation in terms of Functions
The solution of the NLS equation is given in terms of truncated theta function by (see [11]) where In this formula, , , , and are functions of the parameters , ; they are defined by the formulas The parameters , , are real numbers such that Condition (5) implies that Complex numbers are defined in the following way: , , are arbitrary real numbers.
2.2. Relation between and Fredholm Determinant
The function defined in (3) can be rewritten with a summation in terms of subsets of , We choose in formula (3) as for , and for .
Let be the unit matrix and the matrix defined by Then has the following form: From the beginning of this section, has the same expression as in (10), so we have clearly the equality Then the solution of NLS equation takes the form
2.3. Link between Fredholm Determinants and Wronskians
We consider the following functions: We use the following notations: is the wronskian .
We consider matrix defined by Then we have the following statement.
Theorem 1. Consider where
Proof. We start to remove factor in each row in the wronskian for .
Then with The determinant can be written as where , , and , , , , , and , , .
Denoting , , the determinant of is clearly equal to
Then we use the following lemma.
Lemma 2. Let , , and , the matrix formed by replacing the th row of by the th row of . Then
Proof. For , the transposed matrix in the cofactors of , we have the well-known formula .
So it is clear that .
The general term of the product can be written as We get Thus, .
According to the relation (22) of the previous lemma, we get where is the matrix formed by replacing the th row of by the th row of defined previously.
We compute and we get We can simplify the quotient So can be expressed as Then dividing each column by , , and multiplying each row by , , we get and therefore the wronskian can be written as It follows that So, the solution of NLS equation takes the form
2.4. Wronskian Representation of Solutions of NLS Equation
From the previous section, we get the following result.
Theorem 3. Function defined by is a smooth solution of the focusing NLS equation depending on two real parameters, and .
2.5. Quasi-Rational Solutions of NLS Equation in terms of a Limit of a Ratio of Wronskian Determinants
In the following, we take the limit when the parameters for and for .
For simplicity, we denote the term by .
We consider the parameter written in the form When goes to , we realize limited expansions at order , for , of the terms The parameters and , for , are chosen in the form Then we have the following result.
Theorem 4. With the parameters defined by (35), and chosen as in (37), for , the function defined by is a quasi-rational solution of the NLS equation (1) depending on two parameters.
3. Quasi-Rational Solutions of Order 9
We have already constructed in [6] solutions for the lower-order cases, and this method gives the same results. We do not reproduce them here. We only give solutions of the NLS equation in the case $N = 9$.
Because of the length of the expressions of the two polynomials in the solutions of the NLS equation, we only give them in the appendix. In the following cases, we only give the plots of the modulus of the solution in the $(x, t)$ coordinates.
For $a = 0$, $b = 0$, we obtain Akhmediev's breather; we get the expected amplitude of $19$ for the spike (Figure 1).
Figure 1: Solution of NLS, $N = 9$, $a = 0$, $b = 0$.
If we choose nonzero values of $a$ and $b$, we obtain Figure 2.
Figure 2: Solution of NLS, $N = 9$, $a$ and $b$ nonzero.
For another choice of nonzero $a$ and $b$, we have Figure 3.
Figure 3: Solution of NLS, $N = 9$, $a$ and $b$ nonzero.
It can be noted that Figures 2 and 3 are closely analogous with Figure 2(b) in the paper [12] of Kedziora et al. In that work ($N = 8$), it was pointed out that the shift (here corresponding to $a$ and $b$ nonzero) pulls out a ring of fundamental rogue elements, corresponding to 15 of them there and to 17 here. It leaves behind a rogue wave of order $N - 2$, that is, 6 there (amplitude = 13) and 7 here (amplitude = 15). Of course, Figure 1 here is analogous with Figure 2(a) there (amplitudes 19 and 17, resp.).
4. Conclusion
The method used in the present paper provides a powerful tool, as the explicit analytical formulation of the ninth order shows. To my knowledge, it is the first time that the analytical expression of the Peregrine breather of order nine is presented.
It confirms the conjecture about the shape of the breather in the $(x, t)$ coordinates, the maximum of amplitude equal to $2N + 1 = 19$, and the degree of the polynomials in $x$ and $t$, here equal to $90$. For $a$ and $b$ nonzero, the maximum is less than that, as discussed above and seen in Figures 2 and 3.
In the following, we choose all the parameters $a$ and $b$ equal to $0$; here $N = 9$.
The solution of the NLS equation then takes the form given in the appendix.
This answer of mine has been strongly criticized on the ground that it is no more than philosophical blabbering. Well, it may well be. But people seem to be of the opinion that the HUP alone does not ensure randomness and that you need Bell's theorem and other features for the randomness in QM. However, I still believe it is the HUP which is all one needs to appreciate the probabilistic feature of QM. Bell's theorem and other such results only reinforce this probabilistic view. I am very curious to know the right answer.
Asking a separate question instead of abusing the comments is a very good idea! – Sklivvz Apr 9 '11 at 10:17
@Sklivvz: except that this question is not interested in the past discussion (as per @sb1's comment to my answer) so the whole talk about Bell's theorem (which is just misunderstanding on sb1's part anyway) shouldn't be present in this question at all. – Marek Apr 9 '11 at 10:29
I'm appalled that so many people piled on your previous Answer without leaving any comments. Qudos to Marek for having left a comment, however the part of his comment that I agree with is that your Answer was not much to the point. It may be that the other downvoters felt that you didn't Answer the Question, not that it was philosophical blathering. I didn't downvote your Answer, but nor did I upvote it. – Peter Morgan Apr 9 '11 at 13:03
@Peter: No, there were comments exchanged which were not quite friendly. I guess the moderator has removed them all except the first comment. – user1355 Apr 9 '11 at 13:26
I am totally clueless about the above comment made by @Marek. He seems to assume a lot of things which makes one quite surprised and detested! – user1355 Apr 9 '11 at 13:30
6 Answers
I'm not sure if an undergrad's perspective would be useful here - but I'll give it a shot (at worst I'll learn something new).
David Griffiths's "Introduction to Quantum Mechanics" takes great care to motivate the uncertainty principle from more basic founding postulates of Q.M. First, Hilbert space and the state vector, as the description of the particle, are defined. Next, classical observables are formulated as operators on the state vector. Eigenvalues and the bases of the operators are explored, and it is revealed that for certain (conjugate) operators, the state vector cannot be written in the same basis if a unique value for each operator's corresponding observable is desired. It is shown that such operators do not commute. It is finally shown that from non-commutation the uncertainty principle can be mathematically derived.
So the point of this summary (all of which I'm sure you already know well) is the order in which things are done. Griffiths is so far my favorite textbook author, and I'm sure there is a reason he laid things out so explicitly. He stresses the classical nature of the observables and how the state vector is truly fundamental. It always seemed to me (and thus how I understand it) that what he was getting at is that observables like position and momentum are classical, and what we are doing is trying to perform classical observation on a quantum system. When we attempt to do this, we are putting limitations on the state vector that nature simply doesn't impose on her own. The result of this is that we end up with observables that cannot be measured jointly, simply because of our classical bias in "translating" the true state of the particle, which is simply not completely expressible solely in terms of classical observables. To me this, what Q.M. is actually doing, seems more fundamental than the HUP. Perhaps it borders on metaphysics - but it seems to be the logical conclusion of the math/algorithms.
And because Bell's Theorem is mentioned: the inputs for this theorem are already there in Q.M. - the theorem simply tells us how to properly combine them and then conclude the character of the correlations between observables. In a way (once again, as it seems to me) it "measures" what kind of probabilities we are expressing in our theory.
It's true that the uncertainty principle is derived, but what you say in your third paragraph doesn't make much sense. There's not really anything classical about observables. In fact, they act very non-classically since they have nontrivial commutation relations with other things. Observables are operators on the Hilbert space of states, and "project" out (in some sense) the information contained in the state vector you're looking for. The "classical" things are more related to expectation values, not the operators. I think Griffiths discusses this somewhere in the exercises. – Mr X Apr 10 '11 at 14:48
@Jeremy Price what I was getting at is that things like "momentum" and "position" are not true quantum mechanical properties- rather they are classical measures that we apply to the quantum world. But it is a true interpretation about the expectation values, from what I've read. Which is the root of the uncertainty principle, is it not- as the uncertainty of an observable is expressed as a deviation from the expectation value? (In Griffith's derivation at least) – jaskey13 Apr 10 '11 at 15:16
@jaskey13 I don't think it's right to say that about momentum and position. They're very real things quantum mechanically, we still have quantum mechanical analogues of, e.g., conservation of momentum and energy, despite the fact that they are not "well-defined" in a classical sense. In fact, if you look at how to derive the Schroedinger equation, you replace operators into E = p^2/2m and act this on a function, usually as a function of position, which is surely taking all of these properties very seriously and fundamentally! – Mr X Apr 12 '11 at 16:55
@Jeremy Price Are these analogues those that come from an application of Ehrenfest's theorem? If not- could you please tell me what they are? – jaskey13 Apr 12 '11 at 21:48
My edition is surprisingly scant when it comes to conservation laws. Maybe it is time to move on to something more advanced – jaskey13 Apr 12 '11 at 22:56
It's very strange for someone to say that "Bell's theorem ensures something in quantum mechanics". Bell's theorem is a theorem - something that can be mathematically proved to hold given the assumptions. It's valid in the same sense as $1+1=2$. Is $1+1=2$ needed for something in physics? Maybe - but the question clearly makes no sense. Mathematics is always valid in physics - and everywhere else.
However, even the assumptions of Bell's theorem surely can't be "necessary building blocks" for some results in quantum mechanics because Bell's theorem is not a theorem about quantum mechanics at all. It is a theorem (an inequality) about local realist theories - exactly the kind of theories that quantum mechanics is not. Whether someone needs $1+1=2$ doesn't matter because this fact is imposed upon him, anyway. Any proof may be modified so that $1+1=2$ is needed and any proof may be modified so that $1+1=2$ is not needed.
But even if one ignores the comment about "Bell's theorem and other such results" that can't possibly have anything to do with the question, it's nontrivial to make the question precise. The uncertainty principle is normally formulated as a part of quantum mechanics - we say that $\Delta x$ and $\Delta p$ can't have well-defined sharp values at the same moment. What it means for them not to have sharp values? Well, obviously, it means that one measures their values with an error margin, and the fluctuations or choice of the measured value from the allowed distribution has to be random.
If it were not random, there would have to be another quantity for which one should do the same discussion. Again, if the uncertainty principle applied to this hidden variables (and a complementary one), it would imply that its values have to be random. Do you allow me to assume that the HUP holds for whatever variables we have? If you do, obviously, there has to be random things in the Universe.
But even the term "random" is too ill-defined. Do you require some special vanishing of correlations etc.? If you do, shouldn't you describe what those requirements are?
So I don't think it's possible to fully answer vague questions of this kind. I would say a related comment that quantum mechanics - with its random character - is the only mathematically possible and self-consistent framework that is compatible with certain basic observations of the quantum phenomena. The outcomes in quantum mechanics take place randomly, with probabilities and probability distributions that can be calculated from the squared probability amplitudes, and all other attempts to modify the basic framework of quantum mechanics have been ruled out.
If it's so, and it is so, there's really no point in trying to decompose the postulates of quantum mechanics into pieces because the pieces only combine into a viable theoretical structure, able to explain the behavior of important worlds such as ours, when all these postulates are taken seriously at the same moment.
In my opinion HUP is not a "principle" but a consequence of the mathematical framework of QM - it is derived rather than "postulated".
Randomness or uncertainty in measuring some variable in some state is not strictly related to the uncertainty of its canonically conjugate variable. HUP establishes some limitation on them and that's it. What I want to underline is that, say, uncertainty in momentum is determined with the given QM state itself.
Regarding randomness, it is easy to understand if we remember that the information is gathered with the help of photons. When the number of photons in one "observation" is large, their average is well determined, and that is what classical physics deals with. If the number of photons is small, the uncertainty creates an impression of strong randomness in measuring, say, the position of a body. Even the Moon's position is uncertain if based on few-photon measurements.
Uncertainty in measurements is a fundamental feature of states in physics. Determinism is possible only for "well-averaged" measurements. Look at the Ehrenfest equations - they involve average (expectation) values. It implies many-many measurements. In other words, the classical determinism is due to its inclusive character.
Well, you misinterpreted what I (and others) said at least in two important ways.
1. Bell's theorem surely isn't responsible for randomness in QM. That's because it doesn't actually tell you anything about QM itself, only about other theories trying to reproduce the same results that QM (and nature) produces. The reason I mentioned it is that it (severely) restricts the class of non-random theories that can describe the nature. Without such a theorem one might hope (and people still do) that it is possible to construct a deterministic framework that could be compatible with observations. So HUP certainly doesn't imply intrinsic randomness. You need further work to establish that no viable theory (and not just QM) is deterministic. Measurement of violation of Bell's inequalities is what does it (at least if one assumes locality).
2. QM is based on lots of principles. HUP is fundamental (and is built-in by including non-commutative operators into the framework) but no less fundamental than other postulates. Trying to isolate one particular feature of a theory doesn't always make sense. You could try to obtain deterministic QM by removing HUP but that essentially means letting $\hbar \to 0$ and obtaining classical physics, thereby losing all the other special effects of QM.
In other words, your statement "HUP which is all one needs to appreciate the probabilistic feature of QM" couldn't be further removed from reality. To appreciate this probabilistic aspect, one needs to master the mathematical formalism of QM, the way it connects to experiment and the way measurements are interpreted. HUP is only a small part of it and actually the one thing you almost never care about, as it is built into the theory from the start.
You have misunderstood my point as well. I am well aware of all the fundamental postulates of QM. You need all those postulates for a fully functional Q.T. However, my point is UP is the postulate for the essential qualitative element of the randomness in the theory. – user1355 Apr 9 '11 at 8:43
@sb1: that might be the case but you start your question with "people seem to be of the opinion that..." which is simply not the case. People talked about something completely different last time so I am not sure why you bring that in now if you only intend to give downvotes for people's replies. If you only want to talk about pure QM then I suggest you edit your question in order not to confuse people further. – Marek Apr 9 '11 at 8:49
@sb1 I think there's a Useful Answer in here (my +1). – Peter Morgan Apr 9 '11 at 13:37
@Peter: thank you. Well, I believe all I said is correct and relevant but whether I've read @sb1's mind correctly as to what his intents were with this question that's another story... – Marek Apr 9 '11 at 14:14
The title question is
Does the HUP alone ensure the randomness of QM?
I claim that the answer to this question is: No.
The HUP has the basic forms:
$$\Delta E \cdot \Delta t \ge \hbar$$
$$\Delta x \cdot \Delta p \ge \hbar$$
Furthermore, quantum mechanics books prove that for non-commuting observables,
$$[P,Q] \neq 0,$$
the general (Robertson) relation holds:
$$\Delta P \cdot \Delta Q \ge \tfrac{1}{2}\left|\langle [P,Q] \rangle\right|$$
So the HUP is proven generally as a consequence of the non-commutativity of the observables. Understanding why there are non-commuting observables in QM takes us to the rest of the postulates of QM, and so explains why the other answers say that HUP is a consequence of QM in toto.
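(A concrete numerical check, added here and not part of the original answer: the Robertson bound $\Delta P\,\Delta Q \ge \frac{1}{2}\left|\langle[P,Q]\rangle\right|$ verified for the Pauli matrices $\sigma_x$, $\sigma_y$ on a random qubit state.)

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)     # Pauli x
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)  # Pauli y

rng = np.random.default_rng(0)
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)                         # random normalized qubit state

def stdev(op, psi):
    """Standard deviation of an observable in the state psi."""
    mean = np.vdot(psi, op @ psi).real
    mean_sq = np.vdot(psi, op @ op @ psi).real
    return np.sqrt(mean_sq - mean**2)

comm = sx @ sy - sy @ sx                           # [sx, sy] = 2i sz
bound = 0.5 * abs(np.vdot(psi, comm @ psi))
print(stdev(sx, psi) * stdev(sy, psi), ">=", bound)  # the inequality holds
```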
However there is more to the topic of "QM randomness" than this, and we have not yet responded to your remarks about Bell's Theorem.
The first point to note is that in classical engineering, there is a concept of time-domain and frequency domain (for a wave) and the associated law:
$$\Delta \omega \cdot \Delta t \ge 1$$
This law is a consequence of the Fourier transform between these domains, whose kernel is
$$e^{-i\omega t}$$
So the HUP formula is more widespread than just quantum mechanics. Of course if one puts
$$E=\hbar \omega$$
then one obtains
$$\Delta E \cdot \Delta t \ge \hbar$$
once again!
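(A numerical aside, added here: for a Gaussian pulse the RMS widths in time and angular frequency give $\Delta t\,\Delta\omega = 1/2$, the minimum of the Fourier bound; other width conventions yield the order-one constant in the law quoted above.)

```python
import numpy as np

t = np.linspace(-50, 50, 2**14)
dt = t[1] - t[0]
f = np.exp(-t**2 / (2 * 2.0**2))                   # Gaussian pulse, sigma = 2

def rms_width(axis, density):
    p = density / np.trapz(density, axis)          # normalize to a probability density
    mean = np.trapz(axis * p, axis)
    return np.sqrt(np.trapz((axis - mean)**2 * p, axis))

F = np.fft.fftshift(np.fft.fft(f))                 # spectrum of the pulse
w = np.fft.fftshift(np.fft.fftfreq(t.size, d=dt)) * 2 * np.pi  # angular frequencies

print(rms_width(t, np.abs(f)**2) * rms_width(w, np.abs(F)**2))  # -> 0.5 for a Gaussian
```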
So where does quantum randomness (assuming for the moment, that that is the correct term) come from?
One published book that makes this point explicitly is Roger Penrose's "The Emperor's New Mind", p. 297:
[In quantum collapse..] these real numbers play a role as actual probabilities for the alternatives in question. Only one of the alternatives survives into the actuality of physical experience.. It is here, and only here, that the non-determinism of quantum theory makes its entry.
The italics are mine (and this is where Penrose introduces his R notation for describing quantum wave function reduction). Thus if you are familiar with quantum mechanics, this is the reduction postulate (in words).
So we have several different concepts in play here: HUP, QM Postulates, Bell's Theorem, randomness.
I have no more confusion about the fundamentals of quantum mechanics than anybody else here. Your engineering example is cute but wrong, in the sense that the error in measurement in that case can be made arbitrarily small by more accurate instruments. Any theory that comes with a HUP-like principle, where the uncertainty can't be made arbitrarily small, has to have probabilistic features. That's my understanding. – user1355 Apr 9 '11 at 16:02
@sb1 : I think the moral of how this site (has to) work is that if a question appears to be asking for a textbook explanation of something, that is what will be provided by default. If one means to challenge, or extend, the textbooks on an apparently basic topic (which fundamental ones are) then the question formation needs to be referenced and so on. Phrases like "People believe.." don't convey exactly what was intended. So yes there will be misunderstandings about what was intended here. I worked on the question title itself this time as my source for your meaning. Try again though. – Roy Simpson Apr 9 '11 at 21:18
@sb1, I think the "engineering example" Roy cites is relevant to this discussion, but I point out that it emerges in deterministic signal processing, that stochastic SP is not needed (which you both may know). I find that a good author on this issue in SP is Leon Cohen, who I think writes very clearly. His "Time-Frequency Distributions-A Review", PROCEEDINGS OF THE IEEE, VOL. 77, NO. 7, JULY 1989, Page 941, where he discusses the relationship between quantum and SP from 30 years experience, is a 40 pages that's well worth reading. – Peter Morgan Apr 9 '11 at 22:32
@sb1 The error in measurement in the SP case cannot be made arbitrarily small because the concept of measuring the amplitude of the signal at a given frequency requires measurement of the signal at all times, so that a perfect Fourier transform of the signal can be constructed. If we measure the signal for only a finite time, we can effectively only compute the Fourier components of the signal we want in convolution with a window function. – Peter Morgan Apr 9 '11 at 22:41
@Peter, thanks for the link. Actually this Answer is only part of a larger Answer I had developed for this question, which developed the point about the time-frequency domain (and another example) much further. But when I checked with the OP question I found that my conclusions had nothing much to do with the original question, so I truncated the answer to what the OP seemed to be asking. This Answer is now being downvoted probably because it doesnt address an ambiguous question, so I will probably delete it and not answer any more ambiguous questions of this type. – Roy Simpson Apr 10 '11 at 16:08
I think I'm largely going to repeat what Roy, Vladimir, and Jaskey13 have already said, but perhaps, I hope, not so totally that this won't be Useful.
I take it that HUP, despite its grandiose title, is not a principle; it's derived as a consequence of the various mathematical structures of QM. As such, HUP is a part of a characterization of the properties of QM. HUP is, however, something of a lesser part of that characterization because it is not enough to characterize all the differences between classical stochastic physics and QM. It is possible, as Roy says, to construct local classical models for which, under a reasonable physical interpretation of the mathematics, HUP is true.
I'm not completely sure what you mean by "HUP alone does not ensure randomness"? I suppose the interpretation of QM is all probability all the time. In various comments you protest, and I believe, that you know the axioms of QM and their basic interpretation well enough. What I take you to mean is that "HUP alone does not ensure intrinsic randomness". This qualification, which is fairly commonly used, makes sense, to me, of your following comment, with my qualification inserted, that "you need Bell's theorem and other features for the [intrinsic] randomness in QM", whereas the relevance of Bell inequalities to your Question seems to have troubled other people here.
I take “intrinsic” to be a rather coded way to say that a classical probability theory is not isomorphic to quantum probability theory. I've previously cited on Physics SE the presentations of Bell-CHSH inequalities that I think best make this clear, due to Landau and to de Muynck, here, where I note that you also left a notably Useful(8) Answer. Their derivations use the CCRs in a way that is not significantly more obscure than does the derivation of the HUP. I take the Bell-CHSH inequalities to be a reasonable lowest-order characterization of the difference. There is of course confusion concerning the relevance of locality to the Bell inequalities, which I think could get in the way of my discussion here, but I see that you have a relatively sophisticated view of that confusion.
UP can be derived from the Schrödinger equation, and introductory textbooks normally derive it. But in advanced courses one learns that the Schrödinger equation can be derived from the basic axioms of quantum theory. These axioms are held to be the most fundamental postulates about nature. These postulates directly lead us to the general uncertainty relationship. The catch here, imho, is that UP encompasses the gist of the theory. In order to be a quantum theory, all a theory needs is to be consistent with the UP. It is truly a fundamental principle of QT in this sense. – user1355 Apr 10 '11 at 15:10
-1 is not mine. – user1355 Apr 10 '11 at 15:14
@sb1 Downvoting was all too likely for my Answer. Downvotes are meaningless unless someone is wise enough to be able to say why, at least for other readers, if not for the Answerer. Your idea that HUP is truly a principle, and enough to make a theory a quantum theory, seems to me quite radical. I think I don't see that in quantum logic or axiomatic approaches? It's often done from the CCRs, which give CHSH, etc. Is there a proof that a theory that satisfies HUP (and what other conditions?) must violate the Bell inequalities? Otherwise, what you're proposing seems rather different from QM. – Peter Morgan Apr 10 '11 at 17:23
I'm an aspiring physicist who wants to self study some Quantum Physics. My thirst for knowledge is unquenchable and I can not wait 2 more years until I get my first quantum physics class in university, so I want to start with a self study. I am enrolled in a grammar school and the most gifted among the gifted (not my description, mind you, I hate coming off as cocky, sorry) are enrolled in a special 'project'. We are allowed to take 3 school hours a week off in order to work on a project, which can be about anything you want, from music to mathematics. On the 4th of April we have to present our projects. Last year an acquaintance of mine did it about university level mathematics, so I thought, why not do it about university level physics? It is now the 3rd of October so I have half a year. My question is, where can I conduct a self study of quantum physics? Starting from scratch? And is it possible for me to be able to use and understand the Schrödinger equation by April? What are good books, sites, etc. that can help me? My goal is to have a working knowledge of BASIC quantum physics and I would like to understand and be able to use the Schrödinger equation. Is this possible? What is needed for these goals?
Do you have any experience with linear algebra, calculus or differential equations? – DJBunk Oct 3 '12 at 15:58
None with linear algebra, but I do with calculus. – kamal Oct 3 '12 at 16:03
I would say it depends on how ambitious you are in general learning a subject, but I really doubt 3 hours a week will do it. With some effort you might be able to learn some neat qualitative things, but I highly doubt you will be solving the Schrodinger eqn etc by April. I suggest doing something more specific like learning about things like the double slit experiment and the photoelectric effect. Those types of things you can start with Wikipedia to see if it interests. Don't let me discourage you though! – DJBunk Oct 3 '12 at 16:14
Of course those 3 hours a week are only during school time, I expect to spend around ~10 hours a week for this, some weeks more and some less, but at least 10 hours, that I know. I already have a working knowledge of the double slit experiment and photoelectric effect, so I think I am ready for the next step (although I am not certain what that might be). – kamal Oct 3 '12 at 16:17
4 Answers
Just pick up Dirac's book "The Principles of Quantum Mechanics" and read it in conjunction with "The Feynman Lectures on Physics Vol III". Don't waste time with linear algebra, the entire content of the undergraduate courses can be learned in half a day. Don't worry about the infinite dimensional nature of the thing, just reduce all the spaces to finite dimensions.
Also, be aware that "gifted" is a political label that has nothing to do with you, it's just a way for schools to segregate students by their future social class. It's not the analog of special needs, because the students in gifted classes are no different from the students in usual classes, except that they are given a slightly better education. Don't be fooled by a label into thinking you are somehow special, everyone is ordinary, including Einstein and Dirac. One has to do good work despite this, and those folks show it is possible by assiduous effort.
Trouble is you're seeing things from the way you did things, and not how they can be done today using what's available. Have you seen Susskind's QM video lectures for example? Don't you think watching videos while taking notes is more productive? I'm with you and Howard Gardner on "giftedness" – Larry Harson Oct 4 '12 at 2:13
@LarryHarson: I agree that I'm out of date, but it cannot be overemphasized how important it is to read the classics. Dirac's book is timeless, it is lucid, it is brief, it starts with first principles, and its mathematics is self contained. Its path of development is unique and very illuminating, being independent of both Schrodinger and Bohr. Susskind's videos I am sure are excellent, but I have a soft spot for Dirac, who was one of my closest friends throughout adolescence. As for giftedness, it is worst for the "gifted", who are made cocky and incapable of the humility required for study – Ron Maimon Oct 4 '12 at 3:04
I wonder how you think that ''Don't worry about the infinite dimensional nature of the thing, just reduce all the spaces to finite dimensions.'' can be done without some understanding of linear algebra.... – Arnold Neumaier Oct 4 '12 at 15:05
@ArnoldNeumaier: Because I didn't study linear algebra and I read Dirac and had no trouble. – Ron Maimon Oct 4 '12 at 16:09
@kamal: Yes, it's a waste of time, but it was always a waste of time, it was a high-class marker to know Latin (you must be living in some former European colony to have such an education, class-markers were very important under colonialism). High-class markers (King's English, Queen's accent, a Rolex, high-status position) are always extremely time-consuming to acquire (or else they wouldn't work to mark high-classes), and this is why science is always done by low-class people who hate Latin and dress like slobs. The ancient stuff can be useful for Marlowe/Shakespeare, that's about all. – Ron Maimon Oct 27 '12 at 12:43
Without having understood matrices and their interpretation as linear mappings (operators) it is very difficult to get a reasonable understanding of quantum mechanics. So you should spend some time on elementary linear algebra. Wikipedia is not bad on this, so you could pick up most from there. (To start with. For basic math, Wikipedia is almost completely reliable, which is not the case for more specialized topics. In case of doubt, cross check with other sources.)
Today, the shortest road to quantum mechanics is probably quantum information theory. For online introductory lecture notes see, e.g.,
The following lecture notes start from scratch (use Wikipedia for the math not explained there):
This one might also be useful:
In quantum information theory, all Hilbert spaces are finite-dimensional, wave functions are just complex vectors, and the Schroedinger equation is just a linear differential equation with constant coefficients. So you also need to learn a little bit about ordinary differential equations and how linear systems behave. Again, this can be picked up from Wikipedia.
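(To make the finite-dimensional statement concrete, here is a small added sketch, not from the cited notes: a single qubit evolving under $i\,d\psi/dt = H\psi$ with $\hbar = 1$, solved exactly by a matrix exponential.)

```python
import numpy as np
from scipy.linalg import expm

# Two-level system: H = sigma_x drives Rabi oscillations between |0> and |1>.
H = np.array([[0, 1], [1, 0]], dtype=complex)
psi0 = np.array([1, 0], dtype=complex)          # start in |0>

for t in np.linspace(0, np.pi, 5):
    psi_t = expm(-1j * H * t) @ psi0            # exact solution of i dpsi/dt = H psi
    print(f"t = {t:.2f}  P(|1>) = {abs(psi_t[1])**2:.3f}")  # oscillates as sin(t)^2
```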
In more traditional quantum mechanics, the Schroedinger equation is a partial differential equation, and wave functions are complex functions depending on one or more position coordinates. On this level, you need to understand what partial derivatives are and have some knowledge about Fourier transforms. Again, this can be picked up from Wikipedia. Then you might start with
You may also wish to try my online book http://lanl.arxiv.org/abs/0810.1019
It assumes some familiarity with linear algebra and of partial derivatives, but little else. Some basic questions are also answered in my theoretical physics FAQ at http://www.mat.univie.ac.at/~neum/physfaq/physics-faq.html
+1 these are nice sources if you get stuck on linear algebra, but I never got stuck on the linear algebra, rather the sticking points were the partial differential equations and the path integral. – Ron Maimon Oct 4 '12 at 18:17
@RonMaimon: kamal doesn't want to understand the path integral by April. And one needs very little from PDE as long as one doesn't want to solve numerically a real problem. Thus if he has no trouble with the linear algebra and with Fourier transforms, he'll have no trouble at all! – Arnold Neumaier Oct 4 '12 at 18:20
He should be more ambitious then--- the speed with which one can self study has increased tenfold in the last decade. – Ron Maimon Oct 4 '12 at 18:21
@RonMaimon What would you suggest being good for me to set as a goal? You seem like a very informed man and I would like to ask for your personal advice. Of course I am also busy with sports, and I'm starting to learn LaTeX, so I'd say I spend 10 hours a week on this. – kamal Oct 27 '12 at 12:18
@kamal: The only goal is to understand what has been done and push it forward, like everyone else tries to do. For this, you can follow a sequence more or less like Dirac/Feynman/Onsager/Landau/Gell-Mann/Anderson/Mandelstam/Polyakov/Parisi/'tHooft/Scherk/Schwarz/Susskind/Witten (with about two dozen more authors I left out, sorry). I gave a simple but flashy thing which can be tackled after understanding basic QM here: physics.stackexchange.com/questions/41780/… (your question). Maybe read Nielson and Chuang, learn complexity classes. – Ron Maimon Oct 27 '12 at 12:31
You can watch videos from here and lectures from here (first two at least).
If you want to understand quantum physics, you have to understand Fourier series and Fourier transforms. The best introductory text ever is the book Who Is Fourier?. Do not be fooled by its cartoonish appearance; this is a serious book, as can be demonstrated by the fact that the name at the top of the list of advisers is Yoichiro Nambu, the 2008 Nobel prize co-winner.
Then I would work to gain an understanding of the heat equation. The Schrodinger equation can be described as the quantum version of the heat equation (except that what is diffusing is probability).
Fourier developed the Fourier series in order to solve the question of how heat diffuses in a material. If you understand these things, you can understand quantum mechanics within a few months.
For fourier analysis, Koerner is a great source, with both accurate historical material and fascinating applications, including primes in arithmetic progression and an alternate RW proof of Picard's theorem: amazon.com/Fourier-Analysis-T-246-rner/dp/0521389917. I didn't read the cartoon book, but I doubt it has the same depth as Koerner, which is one of the great pedagogical mathematics books, along with Davenport's number theory. These were thankfully used by the mathematics professors I had as an undergraduate, and they were very good folks. – Ron Maimon Oct 4 '12 at 18:19
@RonMaimon Thanks, I will see if I can pick up a copy, it looks pretty cool from the excerpts on amazon – Hal Swyers Oct 4 '12 at 18:30
@Hal Swyers thank you for giving insight into the importance of the heat equation in understanding the Schrodinger equation. I wish I could get a free e-copy of the book "Who Is Fourier?"; otherwise I will try buying it. – baalkikhaal Oct 27 '12 at 10:33
I am basically a Computer Programmer, but Physics has always fascinated and often baffled me.
I have tried to understand probability density in Quantum Mechanics for many many years. What I understood is that the probability amplitude is the square root of the probability of finding an electron around a nucleus. But the square root of probability does not mean anything in the physical sense. Can anyone please explain the physical significance of probability amplitude in Quantum Mechanics?
I read the Wikipedia article on probability amplitude many times over. What are those dumbbell shaped images representing?
6 Answers
Before trying to understand quantum mechanics proper, I think it's helpful to try to understand the general idea of its statistics and probability.
There are basically two kinds of mathematical systems that can yield a nontrivial formalism for probability. One is the kind we're familiar with from everyday life: each outcome has a probability, and those probabilities directly add up to 100%. A coin has two sides, each with 50% probability. $50\% + 50\% = 100\%$, so there you go.
But there's another system of probability, very different from what you and I are used to. It's a system where each event has an associated vector (or complex number), and the sum of the squared magnitudes of those vectors (complex numbers) is 1.
Quantum mechanics works according to this latter system, and for this reason, the complex numbers associated with events are what we often deal with. The wavefunction of a particle is just the distribution of these complex numbers over space. We have chosen to call these numbers the "probability amplitudes" merely as a matter of convenience.
The system of probability that QM follows is very different from what everyday experience would expect us to believe, and this has many mathematical consequences. It makes interference effects possible, for example, and such is only explainable directly with amplitudes. For this reason, amplitudes are physically significant--they are significant because the mathematical model for probability on the quantum scale is not what you and I are accustomed to.
Edit: regarding "just extra stuff under the hood." Here's a more concrete way of talking about the difference between classical and quantum probability.
Let $A$ and $B$ be mutually exclusive events. In classical probability, they would have associated probabilities $p_A$ and $p_B$, and the total probability of them occurring is obtained through addition, $p_{A \cup B} = p_A + p_B$.
In quantum probability, their amplitudes add instead. This is a key difference. There is a total amplitude $\psi_{A \cup B} = \psi_A + \psi_B$, and the squared magnitude of this amplitude--that is, the probability--is as follows:
$$p_{A \cup B} = |\psi_A + \psi_B|^2 = p_A + p_B + (\psi_A^* \psi_B + \psi_A \psi_B^*)$$
There is an extra term, yielding physically different behavior. This quantifies the effects of interference, and for the right choices of $\psi_A$ and $\psi_B$, you could end up with two events that have nonzero individual probabilities, but the probability of the union is zero! Or higher than the individual probabilities.
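As a quick numerical illustration of that extra term, here is a minimal sketch (the amplitudes and phases are illustrative choices) comparing the classical sum of probabilities with the quantum squared sum of amplitudes:

```python
import numpy as np

# Two mutually exclusive events with amplitudes psi_A and psi_B.
# Classically we would add probabilities; quantum mechanically we add
# amplitudes first and square afterwards, which produces the interference term.
psi_A = (1 / np.sqrt(2)) * np.exp(1j * 0.0)     # |psi_A|^2 = 0.5
psi_B = (1 / np.sqrt(2)) * np.exp(1j * np.pi)   # |psi_B|^2 = 0.5, opposite phase

p_classical = abs(psi_A)**2 + abs(psi_B)**2     # 1.0
p_quantum   = abs(psi_A + psi_B)**2             # ~0.0: total destructive interference

interference = 2 * (psi_A.conjugate() * psi_B).real
print(p_classical, p_quantum, p_classical + interference)  # 1.0, ~0.0, ~0.0
```

With these phases the interference term is -1, so two individually likely events together become impossible, exactly the situation described above.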
I'm not too happy with the formulation of "mathematical systems that can yield a nontrivial formalism for probability." Firstly, because it sounds like you imply that there are only these two "systems", and secondly, because the quantum framework is still one where "each outcome has a probability, and those probabilities directly add up to 100%." It's just extra dynamics under the hood. – Nikolaj K. Mar 21 '13 at 16:13
There are only these two systems. It is mathematically proven that you couldn't have, say, an amplitude that must be raised to the 4th power. There is only classical probability as we know it and the quantum kind. It's not just extra stuff under the hood, either. See my edit. – Muphrid Mar 21 '13 at 16:28
Whatever is mathematically proven must be w.r.t. some postulates and these are not stated. Also, there are the observables whose probabilities sum to 100% (namely the probability to be in any of a total set of eigenstates) and in this sense it's just probability theory with complex dynamics under the hood. I still don't think this is an inappropriate formulation. – Nikolaj K. Mar 21 '13 at 18:23
Part of your problem is
"Probability amplitude is the square root of the probability [...]"
The amplitude is a complex number whose squared modulus is the probability. That is $\psi^* \psi = P$, where the asterisk superscript means the complex conjugate.1 It may seem a little pedantic to make this distinction because so far the "complex phase" of the amplitudes has no effect on the observables at all: we could always rotate any given amplitude onto the positive real line and then "the square root" would be fine.
But we can't guarantee to be able to rotate more than one amplitude that way at the same time.
Moreover, there are two ways to combine amplitudes to find probabilities for observation of combined events.
• When the final states are distinguishable you add probabilities: $P_{dis} = P_1 + P_2 = \psi_1^* \psi_1 + \psi_2^* \psi_2$.
• When the final states are indistinguishable,2 you add amplitudes: $\Psi_{1,2} = \psi_1 + \psi_2$, and $P_{ind} = \Psi_{1,2}^*\Psi_{1,2} = \psi_1^*\psi_1 + \psi_1^*\psi_2 + \psi_2^*\psi_1 + \psi_2^*\psi_2$. The terms that mix the amplitudes labeled 1 and 2 are the "interference terms". The interference terms are why we can't ignore the complex nature of the amplitudes, and they cause many kinds of quantum weirdness.
1 Here I'm using a notation reminiscent of a Schrödinger-like formulation, but that interpretation is not required. Just accept $\psi$ as a complex number representing the amplitude for some observation.
2 This is not precise, the states need to be "coherent", but you don't want to hear about that today.
In quantum mechanics, the amplitude $\psi$, and not the probability $|\psi|^2$, is the quantity which admits the superposition principle. Notice that the dynamics of the physical system (Schrödinger equation) is formulated in terms of, and is linear in, the evolution of this object. Observe that working with superpositions of $\psi$ also permits complex phases $e^{i\theta}$ to play a role. In the same spirit, the overlap of two systems is computed by investigating the overlap of the amplitudes.
All you say is factually correct, but since the question asked for an explanation in layman's terms I think there needs to be more explanation. – user9886 Mar 21 '13 at 16:21
@user9886: The integrals involving position operators are layman's terms? – Nikolaj K. Mar 21 '13 at 18:11
What is the benefit in using complex phases rather than just sine and cosine? – wrongusername Feb 27 at 2:57
In quantum mechanics a particle is described by its wave-function $\psi$ (in spatial representation it would for example be $\psi(x,t)$, but I omit the arguments in the following). Observables, like the position $x$, are represented by operators $\hat x$. The mean value of the position of a particle is calculated as $$\int \mathrm{d}x \tilde \psi \hat x \psi.$$
Since $\hat x$ applied to $\psi(x,t)$ just gives the position $x$ times $\psi(x,t)$ we can write the integral as $$\int \mathrm{d}x x \tilde \psi \psi.$$
$\tilde \psi$ is the complex conjugate of $\psi$ and therefore $\tilde \psi \psi=|\psi|^2$.
And finally, since a mean value is usually computed as an integral over the variable times a probability distribution $\rho$ as $$\langle X \rangle_\rho=\int \mathrm{d}X X \rho(X),$$ $|\psi|^2$ can be interpreted as a probability density of finding the particle at some point. For example, the probability of it being between $a$ and $b$ is $$\int_a^b\mathrm{d}x|\psi|^2.$$
So the wave function (which is the solution to the Schrödinger equation that describes the system in question) is a probability amplitude in the sense of the first sentence of the article you linked.
Lastly, the dumbbell shows the area in space where $|\psi|^2$ is larger than some very small number, so basically the regions, where it is not unlikely to find the electron.
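To make this concrete, here is a minimal numerical sketch (the Gaussian packet and the grid are illustrative choices, not from the answer) that checks normalization, the mean position, and the probability of finding the particle in an interval:

```python
import numpy as np

# Discretize a normalized Gaussian wave packet psi(x) and evaluate the
# mean position and the probability of finding the particle in [a, b].
x = np.linspace(-10, 10, 4001)
dx = x[1] - x[0]
sigma = 1.0
psi = (2 * np.pi * sigma**2) ** -0.25 * np.exp(-x**2 / (4 * sigma**2))

rho = np.abs(psi)**2                     # probability density |psi|^2
print(np.sum(rho) * dx)                  # ~1.0: normalization
print(np.sum(x * rho) * dx)              # <x> ~ 0.0 for this symmetric packet
a, b = -1.0, 1.0
mask = (x >= a) & (x <= b)
print(np.sum(rho[mask]) * dx)            # P(a <= x <= b) ~ 0.683 for sigma = 1
```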
I agree with the other answers provided. However, you may find the probability amplitudes more intuitive in the context of the Feynman path integral approach.
Suppose a particle is created at the location $x_1$ at time $0$ and that you want to know the probability for observing it later at some position $x_2$ at time $t$.
Every path $P$ that starts at $x_1$ at time zero and ends at $x_2$ at time $t$ is associated with a (complex) probability amplitude $A_P$. Within the path integral approach, the total amplitude for the process initially described is given by the sum of all these amplitudes:
$A_{\textrm{total}} = \sum_P A_P$
I.e. the sum over all possible paths the particle could take between $x_1$ and $x_2$. These paths interfere coherently, and the probability for observing the particle at $x_2$ at time $t$ is given by the square of the total amplitude:
$\textrm{probability to observe the particle at $x_2$ at time $t$} = |A_{\textrm{total}}|^2 = |\sum_P A_P|^2$
I should note that the Feynman path integral formalism (described above) is actually a special case of a more general approach wherein the amplitudes are associated with processes rather than paths.
Also, a good reference for this is volume 3 of The Feynman Lectures.
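A minimal toy version of this sum, assuming a simple two-slit geometry and free-propagation amplitudes $e^{ikL}/L$ for each path (illustrative choices, not Feynman's full measure over all paths), already shows the interference fringes:

```python
import numpy as np

# Toy "sum over paths": two paths from a source through two slits to a
# detector at height y. Each path contributes amplitude e^{i k L} / L,
# where L is the path length; k and the geometry are illustrative numbers.
k = 50.0                 # wavenumber
d = 1.0                  # slit separation
D = 20.0                 # slit-to-screen distance

def intensity(y):
    L1 = np.hypot(D, y - d / 2)          # path through the upper slit
    L2 = np.hypot(D, y + d / 2)          # path through the lower slit
    A_total = np.exp(1j * k * L1) / L1 + np.exp(1j * k * L2) / L2
    return np.abs(A_total)**2            # probability ~ |sum of amplitudes|^2

for y in np.linspace(0.0, 3.0, 7):
    print(f"y = {y:4.1f}   intensity = {intensity(y):.5f}")
```

The oscillation of the printed intensity with detector position is the coherent interference of the two path amplitudes.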
Have a look at this simplified statement describing the behavior of a particle in a potential problem:
In quantum mechanics, a probability amplitude is a complex number whose modulus squared represents a probability or probability density.
This complex number comes from a solution of a quantum mechanical equation with the boundary conditions of the problem, usually a Schroedinger equation, whose solutions are the "wavefunctions" ψ(x), where x represents the coordinates generically for this argument.
The values taken by a normalized wave function ψ at each point x are probability amplitudes, since |ψ(x)|² gives the probability density at position x.
To get from the complex numbers to a probability distribution, the probability of finding the particle, we have to take the complex square of the wavefunction ψ*ψ .
So the "probability amplitude" is an alternate definition/identification of "wavefunction", coming after the fact, when it was found experimentally that ψ*ψ gives a probability density distribution for the particle in question.
First one computes ψ and then one can evaluate the probability density ψ*ψ, not the other way around. The significance of ψ is that it is the result of a computation.
I agree it is confusing for non physicists who know probabilities from statistics.
Measurement in quantum mechanics
The framework of quantum mechanics requires a careful definition of measurement, and a thorough discussion of its practical and philosophical implications.
Formalism of measurement
Measurable quantities ("observables") as operators
An observable quantity is represented mathematically by a Hermitian or self-adjoint operator. The set of the operator's eigenvalues represents the set of possible outcomes of the measurement. For each eigenvalue there is a corresponding eigenstate (or "eigenvector"), which will be the state of the system after the measurement. Some properties of this representation are
1. The eigenvalues of Hermitian matrices are real. The possible outcomes of a measurement are precisely the eigenvalues of the given observable.
2. A Hermitian matrix can be unitarily diagonalized (See Spectral theorem), generating an orthonormal basis of eigenvectors which spans the state space of the system. In general, the state of a system can be represented as a linear combination of eigenvectors of any Hermitian operator. Physically, this is to say that any state can be expressed as a superposition of the eigenstates of an observable.
Important examples are the position operator, the momentum operator, and the Hamiltonian (energy) operator.
Operators can be noncommuting. In the finite dimensional case, two Hermitian operators commute if they have the same set of (normalized) eigenvectors. Noncommuting observables are said to be incompatible and cannot be measured simultaneously. This can be seen via the uncertainty principle.
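The following short numpy sketch (with a random Hermitian matrix standing in for an observable; an illustrative choice) checks these properties numerically:

```python
import numpy as np

# A random Hermitian matrix as a stand-in observable: its eigenvalues are
# real (the possible outcomes) and its eigenvectors form an orthonormal
# basis, so any state expands as a superposition of eigenstates.
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
O = (A + A.conj().T) / 2                 # Hermitian: O equals its adjoint

vals, vecs = np.linalg.eigh(O)
print(np.allclose(vals.imag, 0))         # eigenvalues are real
print(np.allclose(vecs.conj().T @ vecs, np.eye(4)))   # orthonormal eigenbasis

psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi /= np.linalg.norm(psi)
c = vecs.conj().T @ psi                  # expansion coefficients <n|psi>
print(np.allclose(vecs @ c, psi))        # psi reconstructed from eigenstates
```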
Eigenstates and projection
Assume the system is prepared in state $|\psi\rangle$. Let $\hat O$ be a measurement operator, an observable, with eigenstates $|n\rangle$ for $n = 1, 2, 3, \ldots$ and corresponding eigenvalues $O_1, O_2, O_3, \ldots$. If the measurement outcome is $O_N$, the system will then "collapse" to the state $|N\rangle$ after measurement.
The case of a continuous spectrum is more involved, since, physically speaking, the basis has uncountably many eigenstates, but the general concept is the same. In the position representation, for instance, the eigenstates can be represented by the set of delta functions, indexed by all possible positions of the particle. In the experimental setting, the resolution of any given measurement is finite, and therefore the continuous space may be divided into discrete segments. Another solution is to approximate any lab experiments by a "box" potential (which bounds the volume in which the particle can be found, and thus ensures a discrete spectrum).
Wavefunction collapse
Given any quantum state which is a superposition of eigenstates at time $t$,
$$|\psi\rangle = c_1 e^{-iE_1 t}|1\rangle + c_2 e^{-iE_2 t}|2\rangle + c_3 e^{-iE_3 t}|3\rangle + \cdots,$$
if we measure, for example, the energy of the system and receive $E_2$ (this result is chosen randomly according to the probability given by $\Pr(E_n) = \frac{|c_n|^2}{\sum_k |c_k|^2}$), then the system's quantum state after the measurement is
$$|\psi\rangle = e^{-iE_2 t}|2\rangle,$$
so any repeated measurement of energy will yield $E_2$.
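A minimal numerical sketch of this rule (with illustrative, unnormalized coefficients): sample measurement outcomes with the Born probabilities and note that repeating the measurement on the collapsed state reproduces the same result:

```python
import numpy as np

# Sample an outcome according to Pr(E_n) = |c_n|^2 / sum_k |c_k|^2,
# then "collapse" to the corresponding eigenstate.
rng = np.random.default_rng(1)
c = np.array([1.0, 2.0j, -1.0 + 1.0j])        # unnormalized c_1, c_2, c_3
p = np.abs(c)**2 / np.sum(np.abs(c)**2)       # Born probabilities
print(p)                                      # [1/7, 4/7, 2/7]

outcomes = rng.choice(len(c), size=100_000, p=p)
print(np.bincount(outcomes) / len(outcomes))  # empirical frequencies ~ p

n = outcomes[0]                               # after one measurement...
post = np.zeros_like(c); post[n] = 1.0        # ...the state is |n>, so repeating
print(np.abs(post)**2)                        # the measurement yields E_n with certainty
```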
Figure 1. The process of wavefunction collapse illustrated.
The process in which a quantum state becomes one of the eigenstates of the operator corresponding to the measured observable is called "collapse", or "wavefunction collapse". The final eigenstate appears randomly with a probability equal to the square of its overlap with the original state. The process of collapse has been studied in many experiments, most famously in the double-slit experiment. The wavefunction collapse raises serious questions of determinism and locality, as demonstrated in the EPR paradox and later in GHZ entanglement.
In the last few decades, major advances have been made toward a theoretical understanding of the collapse process. This new theoretical framework, called quantum decoherence, supersedes previous notions of instantaneous collapse and provides an explanation for the absence of quantum coherence after measurement. While this theory correctly predicts the form and probability distribution of the final eigenstates, it does not explain the randomness inherent in the choice of final state.
There are two major approaches toward the "wavefunction collapse":
1. Accept it as it is. This approach was supported by Niels Bohr and his Copenhagen interpretation which accepts the collapse as one of the elementary properties of nature (at least, for small enough systems). According to this, there is an inherent randomness embedded in nature, and physical observables exist only after they are measured (for example: as long as a particle's speed isn't measured it doesn't have any defined speed).
2. Reject it as a physical process and relate to it only as an illusion. This approach says that there is no collapse at all, and we only think there is. Those who support this approach usually offer another interpretation of quantum mechanics, which avoids the wavefunction collapse.
von Neumann measurement scheme
The von Neumann measurement scheme, an ancestor of quantum decoherence theory, describes measurements by taking into account the measuring apparatus, which is also treated as a quantum object. Let the quantum state be in the superposition $|\psi\rangle = \sum_n c_n |\psi_n\rangle$, where $|\psi_n\rangle$ are eigenstates of the operator that needs to be measured. In order to make the measurement, the measured system described by $|\psi\rangle$ needs to interact with the measuring apparatus described by the quantum state $|\phi\rangle$, so that the total wave function before the interaction is $|\psi\rangle|\phi\rangle$. After the interaction, the total wave function exhibits the unitary evolution $|\psi\rangle|\phi\rangle \rightarrow \sum_n c_n |\psi_n\rangle|\phi_n\rangle$, where $|\phi_n\rangle$ are orthonormal states of the measuring apparatus. The unitary evolution above is referred to as premeasurement. One can also introduce the interaction with the environment $|e\rangle$, so that, after the interaction, the total wave function takes the form $\sum_n c_n |\psi_n\rangle|\phi_n\rangle|e_n\rangle$, which is related to the phenomenon of decoherence. The above is completely described by the Schrödinger equation, and there are no interpretational problems with it.

Now the problematic wavefunction collapse does not need to be understood as a process $|\psi\rangle \rightarrow |\psi_n\rangle$ on the level of the measured system, but can also be understood as a process $|\phi\rangle \rightarrow |\phi_n\rangle$ on the level of the measuring apparatus, or as a process $|e\rangle \rightarrow |e_n\rangle$ on the level of the environment. Studying these processes provides considerable insight into the measurement problem by avoiding the arbitrary boundary between the quantum and classical worlds, though it does not explain the presence of randomness in the choice of final eigenstate.

If the set of states $\{|\psi_n\rangle\}$, $\{|\phi_n\rangle\}$, or $\{|e_n\rangle\}$ represents a set of states that do not overlap in space, the appearance of collapse can be generated by either the Bohm interpretation or the Everett interpretation, both of which deny the reality of wavefunction collapse; both, though, predict the same probabilities for collapses to various states as does the conventional interpretation. The Bohm interpretation is held to be correct only by a small minority of physicists, since there are difficulties with its generalization for use with relativistic quantum field theory. However, there is no proof that the Bohm interpretation is inconsistent with quantum field theory, and work to reconcile the two is ongoing. The Everett interpretation easily accommodates relativistic quantum field theory.
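As a minimal concrete instance, take a single qubit as the measured system and another qubit as the apparatus pointer, with a CNOT gate standing in for the system-apparatus coupling (an illustrative choice, not the only possible premeasurement interaction):

```python
import numpy as np

# Minimal von Neumann premeasurement: the system starts in c0|0> + c1|1>,
# the apparatus pointer in |0>, and a CNOT-type interaction produces the
# correlated state c0|0>|0> + c1|1>|1>.
c0, c1 = 0.6, 0.8
system = np.array([c0, c1], dtype=complex)
pointer = np.array([1.0, 0.0], dtype=complex)     # apparatus ready state |phi>

joint = np.kron(system, pointer)                  # |psi>|phi>, basis |00>,|01>,|10>,|11>
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)    # flips pointer iff system is |1>

premeasured = CNOT @ joint
print(premeasured)        # [0.6, 0, 0, 0.8] = c0|00> + c1|11>: perfect correlation
```

The output state is exactly the entangled sum $\sum_n c_n |\psi_n\rangle|\phi_n\rangle$ from the text, here with two terms.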
Example
Suppose that we have a particle in a box. If the energy of the particle is measured to be $E_N = \frac{N^2\pi^2\hbar^2}{2mL^2}$, then the corresponding state of the system is $|\psi_N\rangle = \int |x\rangle\langle x|\psi_N\rangle\,dx$, where $\langle x|\psi_N\rangle = \langle x|N\rangle = \sqrt{\frac{2}{L}}\,\sin\!\left(\frac{N\pi x}{L}\right)$, which is determined by solving the time-independent Schrödinger equation for the given potential.

Alternatively, if instead of knowing the energy of the particle the particle's position is determined to be a distance $S$ from the left wall of the box, the corresponding system state is $|\psi_S\rangle = \int |x\rangle\langle x|\psi_S\rangle\,dx$, where $\langle x|\psi_S\rangle = \langle x|S\rangle = \delta(S - x)$.

These two state functions $|\psi_N\rangle$ and $|\psi_S\rangle$ are distinct functions (of the position $x$, after we left-multiply by the bra state $\langle x|$), but they are in general not orthogonal to each other:

$$\langle\psi_S|\psi_N\rangle = \langle S|N\rangle = \int \langle S|x\rangle\langle x|N\rangle\,dx = \int_0^L \delta(S - x)\,\sqrt{\frac{2}{L}}\,\sin\!\left(\frac{N\pi x}{L}\right)dx = \sqrt{\frac{2}{L}}\,\sin\!\left(\frac{N\pi S}{L}\right).$$

The two systems are therefore distinct; a position measurement is instantaneous, whereas a definite value of energy $E_N$ is established only in the limit of an infinitely long observation period.

Completeness of eigenvectors of Hermitian operators guarantees that either system state, being the eigenvector of one measurement operator, can be expressed as a linear combination of eigenvectors of the other measurement operator:

$$|S\rangle = \sum_n |n\rangle\langle n|S\rangle = \int \sum_n |x\rangle\langle x|n\rangle\langle n|S\rangle\,dx = \int |x\rangle\,\frac{2}{L}\sum_n \sin\!\left(\frac{n\pi x}{L}\right)\sin\!\left(\frac{n\pi S}{L}\right)dx = \int |x\rangle\,\delta(S - x)\,dx,$$

i.e., $|S\rangle = \int |x\rangle\,\delta(S - x)\,dx$, and

$$|N\rangle = \int |s\rangle\langle s|N\rangle\,ds = \int |s\rangle\,\sqrt{\frac{2}{L}}\,\sin\!\left(\frac{N\pi s}{L}\right)ds.$$

The time dependence of the system states is determined by the time-dependent Schrödinger equation. In the preceding example, with energy eigenvalues $E_n$, it follows that the time-dependent solution is

$$|\psi(t)\rangle = \sum_n |n\rangle\langle n|\psi_S\rangle\,e^{-itE_n/\hbar},$$

where $t$ represents the time since the particle's location in space was measured. Consequently

$$\langle n|\psi(t)\rangle = \langle n|\psi_S\rangle\,e^{-itE_n/\hbar} = \sqrt{\frac{2}{L}}\,\sin\!\left(\frac{n\pi S}{L}\right)e^{-itE_n/\hbar} \neq 0,$$

at least for several distinct energy eigenstates $|n\rangle$, for all values of $t$, and for all $0 < S < L$.

The particle state $|\psi_S\rangle$ therefore cannot have evolved (in the above technical sense) into the state $|\psi_N\rangle$ (which is orthogonal to all energy eigenstates, except itself) for any duration $t$. While this conclusion may be characterized instead as "the wave function of the particle having been projected, or having collapsed, into" the energy eigenstate $|\psi_N\rangle$, it is perhaps worth emphasizing that any definite value of energy $E_N$ can be established only in the limit of a long-lasting trial and never for any finite value of time.
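As a quick sanity check of the overlap formula above, the following sketch discretizes the integral with a grid approximation of the delta function (the grid size and the values of $L$, $N$, $S$ are illustrative):

```python
import numpy as np

# Check <S|N> = sqrt(2/L) sin(N pi S / L) by discretizing the integral
# int_0^L delta(S - x) sqrt(2/L) sin(N pi x / L) dx.
L, N, S = 1.0, 3, 0.37
x = np.linspace(0.0, L, 20001)
dx = x[1] - x[0]

psi_N = np.sqrt(2.0 / L) * np.sin(N * np.pi * x / L)
# Represent delta(S - x) by a narrow normalized spike on the grid
delta = np.zeros_like(x)
i = np.argmin(np.abs(x - S))
delta[i] = 1.0 / dx

overlap_numeric = np.sum(delta * psi_N) * dx
overlap_exact = np.sqrt(2.0 / L) * np.sin(N * np.pi * S / L)
print(overlap_numeric, overlap_exact)   # agree up to grid resolution
```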
Optimal quantum measurement
What is the optimal quantum measurement to distinguish mixed states from a given ensemble? This is a natural question whose solution is well understood; it is given by semidefinite programming.
More specifically, suppose a mixed state $\rho_i$ is drawn from the ensemble with probability $p_i$; we wish to find a POVM measurement $\{\Pi_i\}$ so that $\sum_i p_i\,\mathrm{tr}(\Pi_i \rho_i)$ is maximized. This is clearly a semidefinite program:

$$\max\ \sum_i p_i\,\mathrm{tr}(\Pi_i \rho_i) \quad \text{s.t.} \quad \Pi_i \ge 0,\quad \sum_i \Pi_i = I.$$
Interestingly, the dual problem has a nice description:

$$\min\ \mathrm{tr}(X) \quad \text{s.t.} \quad X - p_i\rho_i \ge 0 \text{ for all } i.$$
Let $\hat\Pi_i$ and $\hat X$ be the solutions of the primal and the dual; then

$$\hat\Pi_i\,(\hat X - p_i\rho_i) = 0.$$
From this one can conclude that if all $\rho_i$ are pure states, then the $\hat\Pi_i$ must also be of rank 1. Furthermore, if the $\rho_i$ are in addition independent, then the optimal measurement is a von Neumann measurement.
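For a concrete feel, here is a sketch of the primal program using the cvxpy modeling library (assuming it is installed; the two pure states and equal priors are illustrative). For two pure states the optimum is known in closed form (the Helstrom bound), which gives an independent check:

```python
import numpy as np
import cvxpy as cp

# Primal SDP for discriminating two pure qubit states drawn with equal
# probability. The optimal success probability should match the Helstrom
# bound for two pure states.
p = [0.5, 0.5]
a = np.array([1.0, 0.0])
b = np.array([np.cos(0.3), np.sin(0.3)])          # non-orthogonal to a
rho = [np.outer(a, a), np.outer(b, b)]

Pi = [cp.Variable((2, 2), hermitian=True) for _ in range(2)]
objective = cp.Maximize(cp.real(sum(p[i] * cp.trace(Pi[i] @ rho[i]) for i in range(2))))
constraints = [P >> 0 for P in Pi] + [sum(Pi) == np.eye(2)]
prob = cp.Problem(objective, constraints)
prob.solve()

helstrom = 0.5 * (1 + np.sqrt(1 - abs(a @ b)**2))  # known optimum for two pure states
print(prob.value, helstrom)                        # should agree numerically
```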
Philosophical problems of quantum measurements
What physical interaction constitutes a measurement?
Until the advent of quantum decoherence theory in the late 20th century, a major conceptual problem of quantum mechanics, and especially of the Copenhagen interpretation, was the lack of a distinctive criterion for a given physical interaction to qualify as "a measurement" and cause a wavefunction to collapse. This is best illustrated by Schrödinger's cat paradox.
Major philosophical and metaphysical questions surround this issue:
• The concept of weak measurements.
• Macroscopic systems (such as chairs or cats) do not exhibit counterintuitive quantum properties, which can only be observed in microscopic particles such as electrons or photons. This invites the question: when is a system "big enough" to behave classically rather than quantum mechanically?
Quantum decoherence theory has successfully addressed other questions that previously haunted quantum measurement theory:
• Does the measurement process require a conscious observer?
Answer: No. Coupling an isolated quantum system to another quantum system with many degrees of freedom generically transfers the coherence of the first system into mutual coherence of the two systems. The initially isolated quantum system then appears to "collapse." Interpreting the second system as a measurement apparatus, as in the von Neumann scheme, shows that no consciousness or self-awareness is necessary for collapse of the first system.
• What interactions are strong enough to constitute a measurement?
This question is quantitatively answered by decoherence theory, given a model for the measurement apparatus. The scaling of the measurement effects with the system/apparatus interaction strength usually only weakly depends on the choice of a model for the apparatus, so one can give a generic description of the strength of a measurement induced by a given interaction.
Does measurement actually determine the state?
The question of whether a measurement actually determines the state is deeply related to wavefunction collapse.
Most versions of the Copenhagen interpretation answer this question with an unqualified "yes".
The quantum entanglement problem
See EPR paradox.
I had no problem applying Noether's theorem for translations to the non-relativistic Schrödinger equation
$\mathrm i\hbar\frac{\partial}{\partial t}\psi(\mathbf{r},t) \;=\; \left(- \frac{\hbar^2}{2m}\Delta + V(\mathbf{r},t)\right)\psi(\mathbf{r},t)$
$\Longrightarrow\ \pi=\frac{\partial \mathcal{L}}{\partial \dot{\psi}} \propto \psi^{*}$
$T[\psi]\propto \mathbf{\nabla} \psi$
$\Longrightarrow\ I_{\ \psi,{\ T_\text{(translation)}}}=\int\text d^3x\ \pi\ T[\psi]\propto \int\text d^3x\ \psi^{*} \mathbf{\nabla} \psi = \langle P \rangle_\psi$
But I actually wonder why that works out, given that the Schrödinger equation is not invariant under Galilean transformations.
It might well be that the Schrödinger group, which I'm not familiar with, is close enough to the Galilean group that the fourth line $T[\psi]\propto \mathbf{\nabla} \psi$ is just the same, and that's the reason. I'd like to know if the evaluation of the infinitesimal transformation is the only point at which one has to know the transformations one is actually dealing with. Is my guess right?
Also, regrding the "trick" to establish Galilei-invariance after the conventional transformation via multiplication of the Schrödinger field by a phase (a phase which, among other things, is mass dependend):
Some authors change $\psi(r,t)$ to $\psi(r',t')=\psi(r-vt,t)$, like here in the paper referenced on Wikipedia (there is also a two-year-old version of it online (google)), but other authors, like the writers of the page in the first link, also transform $p$ to $p+mv$ in $\phi$ (which doesn't change the fact that they still have to add a phase). This is all before the phase multiplication. So what is the "right way" here? If I do these transformations involving a multiplication by the phase, do I only transform the actual arguments of the scalar field $\psi(r,t)$, or do I also transform objects like $p$, which classically transform too, but are really just parameters (and the eigenvalues) of the field, and not arguments?
The Schrodinger equation changes form under Galilean transformations, but it is invariant in a quantum sense under these, since you cancel out the change with a phase factor. I wonder why you are confused, because translations and Galilean transformations are both mathematically and logically independent--- you can make a translation symmetry ignoring galilean symmetry, like in a crystal, where you have discrete translations and no boosts, or in He4, where you have continuous translation symmerty but again no boosts. – Ron Maimon Aug 10 '12 at 19:26
@RonMaimon: You're right, I just did the computation for the translations (because that's easy) and here I was just assuming there is some conserved quantity for boosts as well. Is that not the case? And furthermore, are there interesting conserved quantities via Noether due to the new symmetry group (the Schrödinger symmetries)? – NikolajK Aug 11 '12 at 0:32
Yes, there are further non-obvious conserved quantities, the location of the center of mass. This shows up as phase relations in scattering, and in separation theorems, like the reduced-mass/total-mass decomposition for the two-body problem. The center of mass law is independent of the conservation of momentum, although this is counterintuitive. Is this your question? I will answer this way, but it's not clear from what you ask. – Ron Maimon Aug 11 '12 at 3:57
1 Answer
The conserved quantity corresponding to translation is the generator of translations. This is P, and you can see this because $e^{iPa}$ acting on a state $|x\rangle$ produces $|x+a\rangle$.
By P-X symmetry, the operator $X$ generates translations in $P$, so that $e^{iXa}$ takes $|p\rangle$ to $|p-a\rangle$ (the minus sign is dictated by the orientation of the phase space, but you can also explicitly see it from the usual form of the X,P operators). So the naive generator of boosts is
$$ mvX$$
Because this shifts the momentum by $mv$. But this is nonsense, because it doesn't commute with H! So it is not a symmetry. But the reason is because you need a time-dependent phase factor to fix the phase space. Once you do this, the correct conserved quantity B is
$$ vB = v(mX - Pt)$$
Which shifts the momentum eigenstates by $mv$ and multiplies by an additional phase. The quantity $mX - Pt$ is the additional conservation law for boost invariance, and it is the location of the center of mass. For several particles, the generator of boosts is:
$$ {\sum_i m_i X_i - Pt} $$
which shifts each of the momenta by $m_i v$, and corrects by a total phase.
The Hamiltonian
$$ {p^2\over 2} + {p^4\over 4} + V(x) $$
Is an example of an H that is not Boost invariant but is translation invariant. Motion in this H doesn't conserve center of mass, but conserves momentum. Another example is a crystal, where the p-dependence goes like $1-\cos(p)$, so again, you have translation invariance (discrete translation invariance--- p is periodic), but no boost invariance. In the crystal case, boost invariance is an accidental symmetry at low p.
To see how boosts work in the Lagrangian picture, look here: Galilean invariance of classical lagrangian .
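A quick numerical check of this conservation law, using exact free-particle evolution in momentum space with $\hbar = m = 1$ (the grid and packet parameters are illustrative): $\langle X\rangle$ drifts linearly while $\langle B\rangle = \langle mX - Pt\rangle$ stays put:

```python
import numpy as np

# Free Gaussian packet with mean momentum p0: evolve exactly in momentum
# space and check that <B> = m<X> - <P> t is constant while <X> drifts.
n, Lbox = 2048, 80.0
x = (np.arange(n) - n // 2) * (Lbox / n)
dx = Lbox / n
k = 2 * np.pi * np.fft.fftfreq(n, d=dx)

p0, sigma = 1.5, 2.0
psi0 = np.exp(-x**2 / (4 * sigma**2) + 1j * p0 * x)
psi0 /= np.sqrt(np.sum(np.abs(psi0)**2) * dx)

for t in [0.0, 2.0, 4.0, 8.0]:
    psi_t = np.fft.ifft(np.exp(-1j * k**2 * t / 2) * np.fft.fft(psi0))
    rho = np.abs(psi_t)**2
    mean_x = np.sum(x * rho) * dx
    phi = np.fft.fft(psi_t)                       # momentum-space amplitudes
    mean_p = np.sum(k * np.abs(phi)**2) / np.sum(np.abs(phi)**2)
    print(f"t={t:4.1f}  <X>={mean_x:7.3f}  <P>={mean_p:5.3f}  <B>={mean_x - mean_p * t:7.3f}")
```

The printed $\langle B\rangle$ column stays at its initial value even as the packet moves, which is the center-of-mass law in action.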
A wave just before breaking at Manhattan Beach, California.
A wave is a disturbance that propagates through space in a regular pattern, often involving the transfer of energy. When thinking about waves, a person tends to recall ocean waves or ripples on a pond. Scientists have found that sound and light (electromagnetic radiation) can also be described in terms of wave motion. Sound involves mechanical waves that propagate as vibrations through a medium, such as a solid, liquid, or gas. By contrast, light can travel through a vacuum, that is, without a medium. In addition, the movement of subatomic particles also has wavelike properties. Thus a wide range of physical phenomena can be understood in terms of wave motion.
Waves can be represented by simple harmonic motion.
Examples of waves include:
• Ocean surface waves, which are perturbations that propagate through water.
• Sound waves, which are mechanical waves that propagate through air, liquids, or solids. In common usage, sound waves have frequencies that are detectable by the human ear. Scientists, however, include other similar vibratory phenomena in the general category of "sound," even when they lie outside the range of human hearing. It should be noted that although these waves travel and transfer energy from one point to another, there is little or no permanent displacement of the particles of the medium. Rather, the particles of the medium simply oscillate around fixed positions.
• Electromagnetic radiation, which is constituted of radio waves (including microwaves), infrared rays, visible light, ultraviolet rays, X rays, and gamma rays. The various forms of electromagnetic radiation differ in their frequencies (and wavelengths), but they share other properties. They can propagate through a vacuum, traveling at a speed of approximately three hundred thousand kilometers/second. According to the quantum mechanical model, these forms of radiation exhibit the properties of particles as well as waves. The "particles" are thought to consist of packets of energy known as photons.
Moreover, Albert Einstein's theory of General Relativity predicts the existence of gravitational waves, which are fluctuations in the gravitational field. These waves, however, have yet to be observed empirically.
Surface waves in water
Periodic waves are characterized by crests (highs) and troughs (lows). If the waves remain in one place, such as the vibrations of a violin string, they are called standing waves. If the waves are moving, they are called traveling waves.
Waves are often classified as either longitudinal or transverse. Transverse waves are those with vibrations perpendicular to the direction of propagation of the wave; examples include waves on a string and electromagnetic waves. Longitudinal waves are those with vibrations parallel to the direction of propagation of the wave. Most sound waves are longitudinal waves, where the air is both compressed and rarified in the direction of movement of the traveling wave.
Figure: Particle motion in a surface water wave. A = deep water; B = shallow water (the circular movement of a surface particle becomes elliptical with decreasing depth); 1 = direction of wave propagation; 2 = crest; 3 = trough.
All waves exhibit certain types of behavior depending on the situation, as follows:
• Reflection – the change of direction of waves when they hit a reflective surface.
• Refraction – the change of direction of waves when they enter a new medium.
• Interference – the superposition of two (or more) waves that contact each other, producing a new wave pattern.
• Diffraction – the bending, spreading, and interference of waves when they pass by an obstruction or go through a narrow gap.
• Dispersion – the splitting up of waves that have several components of different frequencies.
• Rectilinear propagation – the movement of waves in straight lines.
Thus, by understanding the concept and behavior of waves, we can explain the properties of sound, electromagnetic radiation, subatomic particles, and so forth.
A wave is said to be "polarized" when it oscillates in only one direction. The polarization of a transverse wave (such as light) indicates that the oscillations occur in a single plane perpendicular to the direction of travel. Longitudinal waves, such as sound waves, do not exhibit polarization, because for these waves the direction of oscillation is along the direction of travel. A wave can be polarized by using a device called a "polarizing filter."
Parameters of a wave
A wave can be described mathematically using a series of parameters including its amplitude, wavelength, wavenumber, period, and frequency.
The amplitude of a wave (commonly denoted as A or another letter) is a measure of the maximum disturbance in the medium during one wave cycle. In the illustration to the right, this is the maximum vertical distance between the baseline and the wave. The units for measuring amplitude depend on the type of wave. Waves on a string have an amplitude expressed in terms of distance (meters); sound waves, as pressure (in pascals); and electromagnetic waves, as the amplitude of the electric field (in volts/meter). The amplitude may be constant, in which case the wave is called a continuous wave (c.w.), or it may vary with time or position. The form of variation of amplitude is called the envelope of the wave.
The wavelength (denoted as $\lambda$) is the distance between two successive crests (or troughs). It is generally measured on the metric scale (in meters, centimeters, and so on). For the optical part of the electromagnetic spectrum, wavelength is commonly measured in nanometers (one nanometer equals a billionth of a meter).
A wavenumber, $k$, can be associated with the wavelength by the relation

$$k = \frac{2\pi}{\lambda}.$$
The period, $T$, of a wave is the time taken for a wave oscillation to go through one complete cycle (one crest and one trough). The frequency $f$ (also denoted as $\nu$) is the number of periods per unit time. Frequency is usually measured in hertz (Hz), which corresponds to the number of cycles per second. The frequency and period of a wave are reciprocals of each other; thus their mathematical relationship is $f = \frac{1}{T}$.
One complete cycle of a wave can be said to have an "angular displacement" of $2\pi$ radians; in other words, one cycle is completed and another is about to begin. Thus there is another parameter called angular frequency (or angular speed), $\omega$. It is measured as the number of radians per unit time (radians per second) at a fixed position. Angular frequency is related to the frequency by the equation:

$$\omega = 2\pi f = \frac{2\pi}{T}.$$
There are two types of velocity associated with a wave: phase velocity and group velocity. Phase velocity gives the rate at which the wave propagates. It is calculated by the equation:
$$v_p = \frac{\omega}{k} = \lambda f.$$
Group velocity gives the rate at which information can be transmitted by the wave. In scientific terms, it is the velocity at which variations in the wave's amplitude propagate through space. Group velocity is given by the equation:
$$v_g = \frac{\partial \omega}{\partial k}.$$
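As a worked example, take the textbook deep-water dispersion relation $\omega = \sqrt{gk}$ (an assumed example, not derived in this article); the group velocity then comes out as half the phase velocity:

```python
import numpy as np

# Phase and group velocity for deep-water gravity waves, using the
# dispersion relation omega = sqrt(g k); the wavelength is illustrative.
g = 9.81
wavelength = 100.0                     # meters
k = 2 * np.pi / wavelength             # wavenumber
omega = np.sqrt(g * k)

v_phase = omega / k                    # ~12.5 m/s
# v_group = d omega / d k = (1/2) sqrt(g/k); estimate it numerically too
dk = 1e-6
v_group_numeric = (np.sqrt(g * (k + dk)) - np.sqrt(g * (k - dk))) / (2 * dk)
print(v_phase, 0.5 * np.sqrt(g / k), v_group_numeric)   # group = phase / 2
```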
Interference based on phases of waves
Interference pattern produced with a Michelson interferometer. Bright bands are the result of constructive interference, and the dark bands are the result of destructive interference.
Consider two waves that have the same wavelength (or frequency) and amplitude A, and they are superimposed on each other such that they are "in phase"—that is, the crests and troughs of one wave overlap the crests and troughs of the other, respectively. Then the resultant waveform will have an amplitude of 2 A. This is known as constructive interference.
On the other hand, if the same two waves are 180° out of phase when superimposed (that is, the crests of one wave exactly overlap the troughs of the other), the resultant waveform will have an amplitude of zero. This is known as destructive interference.
Constructive and destructive interference are illustrated below.
Figure: Interference of two waves. Left: two waves in phase, adding constructively. Right: two waves 180° out of phase, cancelling destructively.
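A few lines of Python (with an arbitrary amplitude and frequency) confirm the two limiting cases:

```python
import numpy as np

# Superpose two equal-amplitude sinusoids, in phase and 180 degrees out of
# phase, and check the resulting peak amplitudes (2A and 0 respectively).
A = 1.0
t = np.linspace(0, 1, 1000)
w = 2 * np.pi * 5
wave1 = A * np.sin(w * t)
in_phase = wave1 + A * np.sin(w * t)             # constructive interference
out_phase = wave1 + A * np.sin(w * t + np.pi)    # destructive interference
print(np.max(np.abs(in_phase)), np.max(np.abs(out_phase)))  # ~2.0, ~0.0
```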
Transmission medium
The medium that carries a wave is called the transmission medium. It can be classified into one or more of the following categories:
• A linear medium, if the amplitudes of different waves at any particular point in the medium can be added.
• A bounded medium, if the medium is finite in extent; otherwise, the medium is called an unbounded medium.
• A uniform medium, if the physical properties of the medium are the same in different parts of the medium.
• An isotropic medium, if the physical properties of the medium are the same in different directions.
Mathematics of specific cases
Propagation through strings
The speed ($v$) of a wave traveling along a string equals the square root of the tension ($T$) divided by the linear density ($\rho$):

$$v = \sqrt{\frac{T}{\rho}}.$$
Traveling waves
Traveling waves have a disturbance (amplitude u) that varies with both time (t) and distance (z). This can be expressed mathematically as:
$$u = A(z,t)\,\cos(\omega t - kz + \phi),$$

where $A(z,t)$ is the amplitude envelope of the wave, $k$ is the wavenumber, and $\phi$ is the phase of the wave.
The wave equation
The wave equation is a differential equation that describes how a harmonic wave changes over time. The equation has slightly different forms, depending on how the wave is transmitted and the medium it is traveling through. For a one-dimensional wave traveling down a rope along the x-axis with velocity (v) and amplitude (u) (which generally depends on both x and t), the wave equation is:
$$\frac{1}{v^2}\frac{\partial^2 u}{\partial t^2} = \frac{\partial^2 u}{\partial x^2}.$$
In three dimensions, the equation becomes:
$$\frac{1}{v^2}\frac{\partial^2 u}{\partial t^2} = \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} + \frac{\partial^2 u}{\partial z^2}.$$
It should be noted that the velocity (v) depends on both the type of wave and the medium through which it is being transmitted.
A general solution for the wave equation in one dimension was given by French physicist-mathematician Jean Le Rond d'Alembert (1717-1783). It is
$$u(x,t) = F(x - vt) + G(x + vt).$$
This can be viewed as two pulses travelling down a taut rope in opposite directions; F in the +x direction, and G in the -x direction. If we substitute for x above, replacing it with directions x, y, z, we then can describe a wave propagating in three dimensions.
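One can check d'Alembert's solution numerically: pick arbitrary smooth profiles for $F$ and $G$ (Gaussians below, an illustrative choice) and verify with finite differences that $u_{tt} = v^2 u_{xx}$:

```python
import numpy as np

# Verify that u(x,t) = F(x - vt) + G(x + vt) satisfies u_tt = v^2 u_xx,
# using central finite differences for both second derivatives.
v, h = 2.0, 1e-3
F = lambda s: np.exp(-s**2)
G = lambda s: 0.5 * np.exp(-(s - 1.0)**2)
u = lambda x, t: F(x - v * t) + G(x + v * t)

x0, t0 = 0.3, 0.7   # an arbitrary sample point
u_tt = (u(x0, t0 + h) - 2 * u(x0, t0) + u(x0, t0 - h)) / h**2
u_xx = (u(x0 + h, t0) - 2 * u(x0, t0) + u(x0 - h, t0)) / h**2
print(u_tt - v**2 * u_xx)   # ~0 up to discretization error
```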
In quantum mechanics, the Schrödinger equation describes the wavelike behavior of subatomic particles. Solutions of this equation are wave functions that can be used to describe the probability density of a particle. Quantum mechanics also describes particle properties that other waves (such as light and sound) have on the atomic and subatomic scales.
Further reading
• French, A.P. (1971). Vibrations and Waves (M.I.T. Introductory physics series). Nelson Thornes. ISBN 074874479.
The Uncertainty Principle
First published Mon Oct 8, 2001; substantive revision Mon Jul 3, 2006
Quantum mechanics is generally regarded as the physical theory that is our best candidate for a fundamental and universal description of the physical world. The conceptual framework employed by this theory differs drastically from that of classical physics. Indeed, the transition from classical to quantum physics marks a genuine revolution in our understanding of the physical world.
One striking aspect of the difference between classical and quantum physics is that whereas classical mechanics presupposes that exact simultaneous values can be assigned to all physical quantities, quantum mechanics denies this possibility, the prime example being the position and momentum of a particle. According to quantum mechanics, the more precisely the position (momentum) of a particle is given, the less precisely can one say what its momentum (position) is. This is (a simplistic and preliminary formulation of) the quantum mechanical uncertainty principle for position and momentum. The uncertainty principle played an important role in many discussions on the philosophical implications of quantum mechanics, in particular in discussions on the consistency of the so-called Copenhagen interpretation, the interpretation endorsed by the founding fathers Heisenberg and Bohr.
This should not suggest that the uncertainty principle is the only aspect of the conceptual difference between classical and quantum physics: the implications of quantum mechanics for notions such as (non-)locality, entanglement, and identity play no less havoc with classical intuitions.
1. Introduction
The uncertainty principle is certainly one of the most famous and important aspects of quantum mechanics. It has often been regarded as the most distinctive feature in which quantum mechanics differs from classical theories of the physical world. Roughly speaking, the uncertainty principle (for position and momentum) states that one cannot assign exact simultaneous values to the position and momentum of a physical system. Rather, these quantities can only be determined with some characteristic ‘uncertainties’ that cannot become arbitrarily small simultaneously. But what is the exact meaning of this principle, and indeed, is it really a principle of quantum mechanics? (In his original work, Heisenberg only speaks of uncertainty relations.) And, in particular, what does it mean to say that a quantity is determined only up to some uncertainty? These are the main questions we will explore in the following, focusing on the views of Heisenberg and Bohr.
The notion of ‘uncertainty’ occurs in several different meanings in the physical literature. It may refer to a lack of knowledge of a quantity by an observer, or to the experimental inaccuracy with which a quantity is measured, or to some ambiguity in the definition of a quantity, or to a statistical spread in an ensemble of similarly prepared systems. Also, several different names are used for such uncertainties: inaccuracy, spread, imprecision, indefiniteness, indeterminateness, indeterminacy, latitude, etc. As we shall see, even Heisenberg and Bohr did not decide on a single terminology for quantum mechanical uncertainties. Forestalling a discussion about which name is the most appropriate one in quantum mechanics, we use the name ‘uncertainty principle’ simply because it is the most common one in the literature.
2. Heisenberg
2.1 Heisenberg's road to the uncertainty relations
Why was this issue of the Anschaulichkeit of quantum mechanics such a prominent concern to Heisenberg? This question has already been considered by a number of commentators (Jammer, 1977; Miller 1982; de Regt, 1997; Beller, 1999). For the answer, it turns out, we must go back a little in time. In 1925 Heisenberg had developed the first coherent mathematical formalism for quantum theory (Heisenberg, 1925). His leading idea was that only those quantities that are in principle observable should play a role in the theory, and that all attempts to form a picture of what goes on inside the atom should be avoided. In atomic physics the observational data were obtained from spectroscopy and associated with atomic transitions. Thus, Heisenberg was led to consider the ‘transition quantities’ as the basic ingredients of the theory. Max Born, later that year, realized that the transition quantities obeyed the rules of matrix calculus, a branch of mathematics that was not so well-known then as it is now. In a famous series of papers Heisenberg, Born and Jordan developed this idea into the matrix mechanics version of quantum theory.
Formally, matrix mechanics remains close to classical mechanics. The central idea is that all physical quantities must be represented by infinite self-adjoint matrices (later identified with operators on a Hilbert space). It is postulated that the matrices q and p representing the canonical position and momentum variables of a particle satisfy the so-called canonical commutation rule
qp − pq = iℏ   (1)
where ℏ = h/2π, h denotes Planck's constant, and boldface type is used to represent matrices. The new theory scored spectacular empirical success by encompassing nearly all spectroscopic data known at the time, especially after the concept of the electron spin was included in the theoretical framework.
It came as a big surprise, therefore, when one year later, Erwin Schrödinger presented an alternative theory, that became known as wave mechanics. Schrödinger assumed that an electron in an atom could be represented as an oscillating charge cloud, evolving continuously in space and time according to a wave equation. The discrete frequencies in the atomic spectra were not due to discontinuous transitions (quantum jumps) as in matrix mechanics, but to a resonance phenomenon. Schrödinger also showed that the two theories were equivalent.[2]
Even so, the two approaches differed greatly in interpretation and spirit. Whereas Heisenberg eschewed the use of visualizable pictures, and accepted discontinuous transitions as a primitive notion, Schrödinger claimed as an advantage of his theory that it was anschaulich. In Schrödinger's vocabulary, this meant that the theory represented the observational data by means of continuously evolving causal processes in space and time. He considered this condition of Anschaulichkeit to be an essential requirement on any acceptable physical theory. Schrödinger was not alone in appreciating this aspect of his theory. Many other leading physicists were attracted to wave mechanics for the same reason. For a while, in 1926, before it emerged that wave mechanics had serious problems of its own, Schrödinger's approach seemed to gather more support in the physics community than matrix mechanics.
Understandably, Heisenberg was unhappy about this development. In a letter of 8 June 1926 to Pauli he confessed that "The more I think about the physical part of Schrödinger's theory, the more disgusting I find it", and: "What Schrödinger writes about the Anschaulichkeit of his theory, … I consider Mist (Pauli, 1979, p. 328)". Again, this last German term is translated differently by various commentators: as "junk" (Miller, 1982) "rubbish" (Beller 1999) "crap" (Cassidy, 1992), and perhaps more literally, as "bullshit" (de Regt, 1997). Nevertheless, in published writings, Heisenberg voiced a more balanced opinion. In a paper in Die Naturwissenschaften (1926) he summarized the peculiar situation that the simultaneous development of two competing theories had brought about. Although he argued that Schrödinger's interpretation was untenable, he admitted that matrix mechanics did not provide the Anschaulichkeit which made wave mechanics so attractive. He concluded: "to obtain a contradiction-free anschaulich interpretation, we still lack some essential feature in our image of the structure of matter". The purpose of his 1927 paper was to provide exactly this lacking feature.
2.2 Heisenberg's argument
Let us now look at the argument that led Heisenberg to his uncertainty relations. He started by redefining the notion of Anschaulichkeit. Whereas Schrödinger associated this term with the provision of a causal space-time picture of the phenomena, Heisenberg, by contrast, declared:
We believe we have gained anschaulich understanding of a physical theory, if in all simple cases, we can grasp the experimental consequences qualitatively and see that the theory does not lead to any contradictions. (Heisenberg, 1927, p. 172)
His goal was, of course, to show that, in this new sense of the word, matrix mechanics could lay the same claim to Anschaulichkeit as wave mechanics.
To do this, he adopted an operational assumption: terms like ‘the position of a particle’ have meaning only if one specifies a suitable experiment by which ‘the position of a particle’ can be measured. We will call this assumption the ‘measurement=meaning principle’. In general, there is no lack of such experiments, even in the domain of atomic physics. However, experiments are never completely accurate. We should be prepared to accept, therefore, that in general the meaning of these quantities is also determined only up to some characteristic inaccuracy.
As an example, he considered the measurement of the position of an electron by a microscope. The accuracy of such a measurement is limited by the wave length of the light illuminating the electron. Thus, it is possible, in principle, to make such a position measurement as accurate as one wishes, by using light of a very short wave length, e.g., γ-rays. But for γ-rays, the Compton effect cannot be ignored: the interaction of the electron and the illuminating light should then be considered as a collision of at least one photon with the electron. In such a collision, the electron suffers a recoil which disturbs its momentum. Moreover, the shorter the wave length, the larger is this change in momentum. Thus, at the moment when the position of the particle is accurately known, Heisenberg argued, its momentum cannot be accurately known:
At the instant of time when the position is determined, that is, at the instant when the photon is scattered by the electron, the electron undergoes a discontinuous change in momentum. This change is the greater the smaller the wavelength of the light employed, i.e., the more exact the determination of the position. At the instant at which the position of the electron is known, its momentum therefore can be known only up to magnitudes which correspond to that discontinuous change; thus, the more precisely the position is determined, the less precisely the momentum is known, and conversely (Heisenberg, 1927, p. 174-5).
This is the first formulation of the uncertainty principle. In its present form it is an epistemological principle, since it limits what we can know about the electron. From "elementary formulae of the Compton effect" Heisenberg estimated the ‘imprecisions’ to be of the order
δp δq ∼ h   (2)
He continued: “In this circumstance we see the direct anschaulich content of the relation qp − pq = iℏ.”
He went on to consider other experiments, designed to measure other physical quantities and obtained analogous relations for time and energy:
δt δE ∼ h (3)
and action J and angle w
δw δJ ∼ h (4)
which he saw as corresponding to the "well-known" relations
tE − Et = iℏ or wJ − Jw = iℏ (5)
However, these generalisations are not as straightforward as Heisenberg suggested. In particular, the status of the time variable in his several illustrations of relation (3) is not at all clear (Hilgevoord 2005). See also Section 2.5.
Heisenberg summarized his findings in a general conclusion: all concepts used in classical mechanics are also well-defined in the realm of atomic processes. But, as a pure fact of experience ("rein erfahrungsgemäß"), experiments that serve to provide such a definition for one quantity are subject to particular indeterminacies, obeying relations (2)-(4) which prohibit them from providing a simultaneous definition of two canonically conjugate quantities. Note that in this formulation the emphasis has slightly shifted: he now speaks of a limit on the definition of concepts, i.e. not merely on what we can know, but what we can meaningfully say about a particle. Of course, this stronger formulation follows by application of the above measurement=meaning principle: if there are, as Heisenberg claims, no experiments that allow a simultaneous precise measurement of two conjugate quantities, then these quantities are also not simultaneously well-defined.
Heisenberg's paper has an interesting "Addition in proof" mentioning critical remarks by Bohr, who saw the paper only after it had been sent to the publisher. Among other things, Bohr pointed out that in the microscope experiment it is not the change of the momentum of the electron that is important, but rather the circumstance that this change cannot be precisely determined in the same experiment. An improved version of the argument, responding to this objection, is given in Heisenberg's Chicago lectures of 1930.
Here (Heisenberg, 1930, p. 16), it is assumed that the electron is illuminated by light of wavelength λ and that the scattered light enters a microscope with aperture angle ε. According to the laws of classical optics, the accuracy of the microscope depends on both the wave length and the aperture angle; Abbe's criterion for its ‘resolving power’, i.e. the size of the smallest discernible details, gives
δq ∼ λ/sin ε (6)
On the other hand, the direction of a scattered photon, when it enters the microscope, is unknown within the angle ε, rendering the momentum change of the electron uncertain by an amount
δp ∼ h sin ε/λ (7)
leading again to the result (2).
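To spell out the step from (6) and (7) to (2): the wavelength and the aperture angle cancel in the product of the two estimates,

$$\delta q\,\delta p \;\sim\; \frac{\lambda}{\sin\varepsilon}\cdot\frac{h\sin\varepsilon}{\lambda} \;=\; h.$$

This also shows why the trade-off cannot be evaded by tuning the experimental parameters: any choice of λ or ε that shrinks one factor inflates the other.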
Let us now analyse Heisenberg's argument in more detail. First note that, even in this improved version, Heisenberg's argument is incomplete. According to Heisenberg's ‘measurement=meaning principle’, one must also specify, in the given context, what the meaning is of the phrase ‘momentum of the electron’, in order to make sense of the claim that this momentum is changed by the position measurement. A solution to this problem can again be found in the Chicago lectures (Heisenberg, 1930, p. 15). Here, he assumes that initially the momentum of the electron is precisely known, e.g. it has been measured in a previous experiment with an inaccuracy δpi, which may be arbitrarily small. Then, its position is measured with inaccuracy δq, and after this, its final momentum is measured with an inaccuracy δpf. All three measurements can be performed with arbitrary precision. Thus, the three quantities δpi, δq, and δpf can be made as small as one wishes. If we assume further that the initial momentum has not changed until the position measurement, we can speak of a definite momentum until the time of the position measurement. Moreover we can give operational meaning to the idea that the momentum is changed during the position measurement: the outcome of the second momentum measurement (say pf) will generally differ from the initial value pi. In fact, one can also show that this change is discontinuous, by varying the time between the three measurements.
Let us now try to see, adopting this more elaborate set-up, if we can complete Heisenberg's argument. We have now been able to give empirical meaning to the ‘change of momentum’ of the electron, pf − pi. Heisenberg's argument claims that the order of magnitude of this change is at least inversely proportional to the inaccuracy of the position measurement:
| pf − pi | δq ∼ h (8)
However, can we now draw the conclusion that the momentum is only imprecisely defined? Certainly not. Before the position measurement, its value was pi, after the measurement it is pf. One might, perhaps, claim that the value at the very instant of the position measurement is not yet defined, but we could simply settle this by an assignment by convention, e.g., we might assign the mean value (pi + pf)/2 to the momentum at this instant. But then, the momentum is precisely determined at all instants, and Heisenberg's formulation of the uncertainty principle no longer follows. The above attempt of completing Heisenberg's argument thus overshoots its mark.
A solution to this problem can again be found in the Chicago Lectures. Heisenberg admits that position and momentum can be known exactly. He writes:
If the velocity of the electron is at first known, and the position then exactly measured, the position of the electron for times previous to the position measurement may be calculated. For these past times, δpδq is smaller than the usual bound. (Heisenberg 1930, p. 15)
Indeed, Heisenberg says: "the uncertainty relation does not hold for the past".
Apparently, when Heisenberg refers to the uncertainty or imprecision of a quantity, he means that the value of this quantity cannot be given beforehand. In the sequence of measurements we have considered above, the uncertainty in the momentum after the measurement of position has occurred refers to the idea that the value of the momentum is not fixed just before the final momentum measurement takes place. Once this measurement is performed, and reveals a value pf, the uncertainty relation no longer holds; these values then belong to the past. Clearly, then, Heisenberg is concerned with unpredictability: the point is not that the momentum of a particle changes due to a position measurement, but rather that it changes by an unpredictable amount. It is, however, always possible to measure, and hence define, the size of this change in a subsequent measurement of the final momentum with arbitrary precision.
Although Heisenberg admits that we can consistently attribute values of momentum and position to an electron in the past, he sees little merit in such talk. He points out that these values can never be used as initial conditions in a prediction about the future behavior of the electron, or subjected to experimental verification. Whether or not we grant them physical reality is, as he puts it, a matter of personal taste. Heisenberg's own taste is, of course, to deny their physical reality. For example, he writes, "I believe that one can formulate the emergence of the classical ‘path’ of a particle pregnantly as follows: the ‘path’ comes into being only because we observe it" (Heisenberg, 1927, p. 185). Apparently, in his view, a measurement does not only serve to give meaning to a quantity, it creates a particular value for this quantity. This may be called the ‘measurement=creation’ principle. It is an ontological principle, for it states what is physically real.
This then leads to the following picture. First we measure the momentum of the electron very accurately. By ‘measurement=meaning’, this entails that the term "the momentum of the particle" is now well-defined. Moreover, by the ‘measurement=creation’ principle, we may say that this momentum is physically real. Next, the position is measured with inaccuracy δq. At this instant, the position of the particle becomes well-defined and, again, one can regard this as a physically real attribute of the particle. However, the momentum has now changed by an amount that is unpredictable, of the order | pf − pi | ∼ h/δq. The meaning and validity of this claim can be verified by a subsequent momentum measurement.
The question is then what status we shall assign to the momentum of the electron just before its final measurement. Is it real? According to Heisenberg it is not. Before the final measurement, the best we can attribute to the electron is some unsharp, or fuzzy momentum. These terms are meant here in an ontological sense, characterizing a real attribute of the electron.
2.3 The interpretation of Heisenberg's relation
The relations Heisenberg had proposed were soon considered to be a cornerstone of the Copenhagen interpretation of quantum mechanics. Just a few months later, Kennard (1927) already called them the "essential core" of the new theory. Taken together with Heisenberg's contention that they provided the intuitive content of the theory and their prominent role in later discussions on the Copenhagen interpretation, a dominant view emerged in which the uncertainty relations were regarded as a fundamental principle of the theory.
The interpretation of these relations has often been debated. Do Heisenberg's relations express restrictions on the experiments we can perform on quantum systems, and, therefore, restrictions on the information we can gather about such systems; or do they express restrictions on the meaning of the concepts we use to describe quantum systems? Or else, are they restrictions of an ontological nature, i.e., do they assert that a quantum system simply does not possess a definite value for its position and momentum at the same time? The difference between these interpretations is partly reflected in the various names by which the relations are known, e.g. as ‘inaccuracy relations’, or: ‘uncertainty’, ‘indeterminacy’ or ‘unsharpness relations’. The debate between these different views has been addressed by many authors, but it has never been settled completely. Let it suffice here to make only two general observations.
First, it is clear that in Heisenberg's own view all the above questions stand or fall together. Indeed, we have seen that he adopted an operational "measurement=meaning" principle according to which the meaningfulness of a physical quantity was equivalent to the existence of an experiment purporting to measure that quantity. Similarly, his "measurement=creation" principle allowed him to attribute physical reality to such quantities. Hence, Heisenberg's discussions moved rather freely and quickly from talk about experimental inaccuracies to epistemological or ontological issues and back again.
However, ontological questions seemed to be of somewhat less interest to him. For example, there is a passage (Heisenberg, 1927, p. 197), where he discusses the idea that, behind our observational data, there might still exist a hidden reality in which quantum systems have definite values for position and momentum, unaffected by the uncertainty relations. He emphatically dismisses this conception as an unfruitful and meaningless speculation, because, as he says, the aim of physics is only to describe observable data. Similarly, in the Chicago Lectures (Heisenberg 1930, p. 11), he warns that human language permits the utterance of statements which have no empirical content at all, but nevertheless produce a picture in our imagination. He notes, "One should be especially careful in using the words ‘reality’, ‘actually’, etc., since these words very often lead to statements of the type just mentioned." So, Heisenberg also endorsed an interpretation of his relations as rejecting a reality in which particles have simultaneous definite values for position and momentum.
The second observation is that although for Heisenberg experimental, informational, epistemological and ontological formulations of his relations were, so to say, just different sides of the same coin, this is not so for those who do not share his operational principles or his view on the task of physics. Alternative points of view, in which e.g. the ontological reading of the uncertainty relations is denied, are therefore still viable. The statement, often found in the literature of the thirties, that Heisenberg had proved the impossibility of associating a definite position and momentum to a particle is certainly wrong. But the precise meaning one can coherently attach to Heisenberg's relations depends rather heavily on the interpretation one favors for quantum mechanics as a whole. And because no agreement has been reached on this latter issue, one cannot expect agreement on the meaning of the uncertainty relations either.
2.4 Uncertainty relations or uncertainty principle?
Let us now move to another question about Heisenberg's relations: do they express a principle of quantum theory? Probably the first influential author to call these relations a ‘principle’ was Eddington, who, in his Gifford Lectures of 1928 referred to them as the ‘Principle of Indeterminacy’. In the English literature the name uncertainty principle became most common. It is used both by Condon and Robertson in 1929, and also in the English version of Heisenberg's Chicago Lectures (Heisenberg, 1930), although, remarkably, nowhere in the original German version of the same book (see also Cassidy, 1998). Indeed, Heisenberg never seems to have endorsed the name ‘principle’ for his relations. His favourite terminology was ‘inaccuracy relations’ (Ungenauigkeitsrelationen) or ‘indeterminacy relations’ (Unbestimmtheitsrelationen). We know only one passage, in Heisenberg's own Gifford lectures, delivered in 1955-56 (Heisenberg, 1958, p. 43), where he mentioned that his relations "are usually called relations of uncertainty or principle of indeterminacy". But this can well be read as his yielding to common practice rather than his own preference.
But does the relation (2) qualify as a principle of quantum mechanics? Several authors, most notably Karl Popper (1967), have contested this view. Popper argued that the uncertainty relations cannot be granted the status of a principle on the grounds that they are derivable from the theory, whereas one cannot obtain the theory from the uncertainty relations. (The argument being that one can never derive any equation, say, the Schrödinger equation, or the commutation relation (1), from an inequality.)
Popper's argument is, of course, correct but we think it misses the point. There are many statements in physical theories which are called principles even though they are in fact derivable from other statements in the theory in question. A more appropriate starting point for this issue is not the question of logical priority but rather Einstein's distinction between ‘constructive theories’ and ‘principle theories’.
Einstein proposed this famous classification in (Einstein, 1919). Constructive theories are theories which postulate the existence of simple entities behind the phenomena. They endeavour to reconstruct the phenomena by framing hypotheses about these entities. Principle theories, on the other hand, start from empirical principles, i.e. general statements of empirical regularities, employing no or only a bare minimum of theoretical terms. The purpose is to build up the theory from such principles. That is, one aims to show how these empirical principles provide sufficient conditions for the introduction of further theoretical concepts and structure.
The prime example of a theory of principle is thermodynamics. Here the role of the empirical principles is played by the statements of the impossibility of various kinds of perpetual motion machines. These are regarded as expressions of brute empirical fact, providing the appropriate conditions for the introduction of the concepts of energy and entropy and their properties. (There is a lot to be said about the tenability of this view, but that is not the topic of this entry.)
Now obviously, once the formal thermodynamic theory is built, one can also derive the impossibility of the various kinds of perpetual motion. (They would violate the laws of energy conservation and entropy increase.) But this derivation should not mislead one into thinking that they were not principles of the theory after all. The point is just that empirical principles are statements that do not rely on the theoretical concepts (in this case entropy and energy) for their meaning. They are interpretable independently of these concepts and, further, their validity on the empirical level still provides the physical content of the theory.
A similar example is provided by special relativity, another theory of principle, which Einstein deliberately designed after the ideal of thermodynamics. Here, the empirical principles are the light postulate and the relativity principle. Again, once we have built up the modern theoretical formalism of the theory (the Minkowski space-time) it is straightforward to prove the validity of these principles. But again this does not count as an argument for claiming that they were not principles after all. So the question whether the term ‘principle’ is justified for Heisenberg's relations should, in our view, be understood as the question whether they are conceived of as empirical principles.
One can easily show that this idea was never far from Heisenberg's intentions. We have already seen that Heisenberg presented the relations as the result of a "pure fact of experience". A few months after his 1927 paper, he wrote a popular paper with the title "Ueber die Grundprincipien der Quantenmechanik" ("On the fundamental principles of quantum mechanics") where he made the point even more clearly. Here Heisenberg described his recent breakthrough in the interpretation of the theory as follows: "It seems to be a general law of nature that we cannot determine position and velocity simultaneously with arbitrary accuracy". Now actually, and in spite of its title, the paper does not identify or discuss any ‘fundamental principle’ of quantum mechanics. So, it must have seemed obvious to his readers that he intended to claim that the uncertainty relation was a fundamental principle, forced upon us as an empirical law of nature, rather than a result derived from the formalism of the theory.
This reading of Heisenberg's intentions is corroborated by the fact that, even in his 1927 paper, applications of his relation frequently present the conclusion as a matter of principle. For example, he says "In a stationary state of an atom its phase is in principle indeterminate" (Heisenberg, 1927, p. 177, [emphasis added]). Similarly, in a paper of 1928, he described the content of his relations as: "It has turned out that it is in principle impossible to know, to measure the position and velocity of a piece of matter with arbitrary accuracy" (Heisenberg, 1984, p. 26, [emphasis added]).
So, although Heisenberg did not originate the tradition of calling his relations a principle, it is not implausible to attribute the view to him that the uncertainty relations represent an empirical principle that could serve as a foundation of quantum mechanics. In fact, his 1927 paper expressed this desire explicitly: "Surely, one would like to be able to deduce the quantitative laws of quantum mechanics directly from their anschaulich foundations, that is, essentially, relation [(2)]" (ibid, p. 196). This is not to say that Heisenberg was successful in reaching this goal, or that he did not express other opinions on other occasions.
Let us conclude this section with two remarks. First, if the uncertainty relation is to serve as an empirical principle, one might well ask what its direct empirical support is. In Heisenberg's analysis, no such support is mentioned. His arguments concerned thought experiments in which the validity of the theory, at least at a rudimentary level, is implicitly taken for granted. Jammer (1974, p. 82) conducted a literature search for high precision experiments that could seriously test the uncertainty relations and concluded they were still scarce in 1974. Real experimental support for the uncertainty relations in experiments in which the inaccuracies are close to the quantum limit has come about only more recently. (See Kaiser, Werner and George 1983, Uffink 1985, Nairz, Arndt, and Zeilinger, 2002.)
Second, it is remarkable that in his later years Heisenberg put a somewhat different gloss on his relations. In his autobiography Der Teil und das Ganze of 1969 he described how he had found his relations inspired by a remark by Einstein that "it is the theory which decides what one can observe" -- thus giving precedence to theory above experience, rather than the other way around. Some years later he even admitted that his famous discussions of thought experiments were actually trivial since "… if the process of observation itself is subject to the laws of quantum theory, it must be possible to represent its result in the mathematical scheme of this theory" (Heisenberg, 1975, p. 6).
2.5 Mathematical elaboration
When Heisenberg introduced his relation, his argument was based only on qualitative examples. He did not provide a general, exact derivation of his relations.[3] Indeed, he did not even give a definition of the uncertainties δq, etc., occurring in these relations. Of course, this was consistent with the announced goal of that paper, i.e. to provide some qualitative understanding of quantum mechanics for simple experiments.
The first mathematically exact formulation of the uncertainty relations is due to Kennard. He proved in 1927 the theorem that for all normalized state vectors |ψ> the following inequality holds:
Δψp Δψq ≥ ℏ/2 (9)
Here, Δψp and Δψq are standard deviations of position and momentum in the state vector |ψ>, i.e.,
(Δψp)² = <p²>ψ − (<p>ψ)², (Δψq)² = <q²>ψ − (<q>ψ)². (10)
where <·>ψ = <ψ|·|ψ> denotes the expectation value in state |ψ>. The inequality (9) was generalized in 1929 by Robertson who proved that for all observables (self-adjoint operators) A and B
ΔψA ΔψB ≥ ½|<[A,B]>ψ| (11)
where [A, B] := AB − BA denotes the commutator. This relation was in turn strengthened by Schrödinger (1930), who obtained:
(ΔψA)² (ΔψB)² ≥ ¼|<[A,B]>ψ|² + ¼|<{A−<A>ψ, B−<B>ψ}>ψ|² (12)
where {A, B} := (AB + BA) denotes the anti-commutator.
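Two standard observations may be added here (textbook material, not part of the original papers). First, Robertson's inequality (11) contains Kennard's (9) as the special case A = q, B = p, since the commutator is then the c-number of relation (1):

$$\Delta_\psi q\,\Delta_\psi p \;\ge\; \tfrac{1}{2}\bigl|\langle[q,p]\rangle_\psi\bigr| \;=\; \tfrac{1}{2}\,|i\hbar| \;=\; \frac{\hbar}{2}.$$

Second, the bound in (9) is sharp: for any width a > 0, the Gaussian wave function

$$\psi(q) = (2\pi a^2)^{-1/4}\,e^{-q^2/4a^2}, \qquad \Delta_\psi q = a, \quad \Delta_\psi p = \frac{\hbar}{2a},$$

attains Δψp Δψq = ℏ/2 exactly.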
Since the above inequalities have the virtue of being exact and general, in contrast to Heisenberg's original semi-quantitative formulation, it is tempting to regard them as the exact counterpart of Heisenberg's relations (2)-(4). Indeed, such was Heisenberg's own view. In his Chicago Lectures (Heisenberg 1930, pp. 15-19), he presented Kennard's derivation of relation (9) and claimed that "this proof does not differ at all in mathematical content" from the semi-quantitative argument he had presented earlier, the only difference being that now "the proof is carried through exactly".
But it may be useful to point out that both in status and intended role there is a difference between Kennard's inequality and Heisenberg's previous formulation (2). The inequalities discussed in the present section are not statements of empirical fact, but theorems of the quantum mechanical formalism. As such, they presuppose the validity of this formalism, and in particular the commutation relation (1), rather than elucidating its intuitive content or creating ‘room’ or ‘freedom’ for the validity of this relation. At best, one should see the above inequalities as showing that the formalism is consistent with Heisenberg's empirical principle.
This situation is similar to that arising in other theories of principle where, as noted in Section 2.4, one often finds that, next to an empirical principle, the formalism also provides a corresponding theorem. And similarly, this situation should not, by itself, cast doubt on the question whether Heisenberg's relation can be regarded as a principle of quantum mechanics.
There is a second notable difference between (2) and (9). Heisenberg did not give a general definition for the ‘uncertainties’ δp and δq. The most definite remark he made about them was that they could be taken as "something like the mean error". In the discussions of thought experiments, he and Bohr would always quantify uncertainties on a case-by-case basis by choosing some parameters which happened to be relevant to the experiment at hand. By contrast, the inequalities (9)-(12) employ a single specific expression as a measure for ‘uncertainty’: the standard deviation. At the time, this choice was not unnatural, given that this expression is well-known and widely used in error theory and the description of statistical fluctuations. However, there was very little or no discussion of whether this choice was appropriate for a general formulation of the uncertainty relations. A standard deviation reflects the spread or expected fluctuations in a series of measurements of an observable in a given state. It is not at all easy to connect this idea with the concept of the ‘inaccuracy’ of a measurement, such as the resolving power of a microscope. In fact, even though Heisenberg had taken Kennard's inequality as the precise formulation of the uncertainty relation, he and Bohr never relied on standard deviations in their many discussions of thought experiments, and indeed, it has been shown (Uffink and Hilgevoord, 1985; Hilgevoord and Uffink, 1988) that these discussions cannot be framed in terms of standard deviations.
Another problem with the above elaboration is that the ‘well-known’ relations (5) are actually false if energy E and action J are to be positive operators (Jordan 1927). In that case, self-adjoint operators t and w do not exist and inequalities analogous to (9) cannot be derived. Also, these inequalities do not hold for angle and angular momentum (Uffink 1990). These obstacles have led to a quite extensive literature on time-energy and angle-action uncertainty relations (Muga et al. 2002, Hilgevoord 2005).
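To indicate why relations (5) fail for a positive energy (a sketch of the standard argument usually credited to Pauli, paraphrased here, not a quotation from Jordan's paper): if a self-adjoint operator t existed with tE − Et = iℏ, then for every real ε

$$e^{i\varepsilon t/\hbar}\,E\,e^{-i\varepsilon t/\hbar} \;=\; E - \varepsilon,$$

so the spectrum of E would be invariant under arbitrary translations and hence could not be bounded from below, let alone positive.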
3. Bohr
In spite of the fact that Heisenberg's and Bohr's views on quantum mechanics are often lumped together as (part of) ‘the Copenhagen interpretation’, there is considerable difference between their views on the uncertainty relations.
3.1 From wave-particle duality to complementarity
Long before the development of modern quantum mechanics, Bohr had been particularly concerned with the problem of particle-wave duality, i.e. the problem that experimental evidence on the behaviour of both light and matter seemed to demand a wave picture in some cases, and a particle picture in others. Yet these pictures are mutually exclusive. Whereas a particle is always localized, the very definition of the notions of wavelength and frequency requires an extension in space and in time. Moreover, the classical particle picture is incompatible with the characteristic phenomenon of interference.
His long struggle with wave-particle duality had prepared him for a radical step when the dispute between matrix and wave mechanics broke out in 1926-27. For the main contestants, Heisenberg and Schrödinger, the issue at stake was which view could claim to provide a single coherent and universal framework for the description of the observational data. The choice was, essentially, between a description in terms of continuously evolving waves, or else one of particles undergoing discontinuous quantum jumps. By contrast, Bohr insisted that elements from both views were equally valid and equally needed for an exhaustive description of the data. His way out of the contradiction was to renounce the idea that the pictures refer, in a literal one-to-one correspondence, to physical reality. Instead, the applicability of these pictures was to become dependent on the experimental context. This is the gist of the viewpoint he called ‘complementarity’.
Bohr first conceived the general outline of his complementarity argument in early 1927, during a skiing holiday in Norway, at the same time when Heisenberg wrote his uncertainty paper. When he returned to Copenhagen and found Heisenberg's manuscript, they got into an intense discussion. On the one hand, Bohr was quite enthusiastic about Heisenberg's ideas which seemed to fit wonderfully with his own thinking. Indeed, in his subsequent work, Bohr always presented the uncertainty relations as the symbolic expression of his complementarity viewpoint. On the other hand, he criticized Heisenberg severely for his suggestion that these relations were due to discontinuous changes occurring during a measurement process. Rather, Bohr argued, their proper derivation should start from the indispensability of both particle and wave concepts. He pointed out that the uncertainties in the experiment did not exclusively arise from the discontinuities but also from the fact that in the experiment we need to take into account both the particle theory and the wave theory. It is not so much the unknown disturbance which renders the momentum of the electron uncertain but rather the fact that the position and the momentum of the electron cannot be simultaneously defined in this experiment. (See the "Addition in Proof" to Heisenberg's paper.)
We shall not go too deeply into the matter of Bohr's interpretation of quantum mechanics since we are mostly interested in Bohr's view on the uncertainty principle. For a more detailed discussion of Bohr's philosophy of quantum physics we refer to Scheibe (1973), Folse (1985), Honner (1987) and Murdoch (1987). It may be useful, however, to sketch some of the main points. Central in Bohr's considerations is the language we use in physics. No matter how abstract and subtle the concepts of modern physics may be, they are essentially an extension of our ordinary language and a means to communicate the results of our experiments. These results, obtained under well-defined experimental circumstances, are what Bohr calls the "phenomena". A phenomenon is "the comprehension of the effects observed under given experimental conditions" (Bohr 1939, p. 24), it is the resultant of a physical object, a measuring apparatus and the interaction between them in a concrete experimental situation. The essential difference between classical and quantum physics is that in quantum physics the interaction between the object and the apparatus cannot be made arbitrarily small; the interaction must at least comprise one quantum. This is expressed by Bohr's quantum postulate:
[… the] essence [of the formulation of the quantum theory] may be expressed in the so-called quantum postulate, which attributes to any atomic process an essential discontinuity or rather individuality, completely foreign to classical theories and symbolized by Planck's quantum of action. (Bohr, 1928, p. 580)
A phenomenon, therefore, is an indivisible whole and the result of a measurement cannot be considered as an autonomous manifestation of the object itself independently of the measurement context. The quantum postulate forces upon us a new way of describing physical phenomena:
In this situation, we are faced with the necessity of a radical revision of the foundation for the description and explanation of physical phenomena. Here, it must above all be recognized that, however far quantum effects transcend the scope of classical physical analysis, the account of the experimental arrangement and the record of the observations must always be expressed in common language supplemented with the terminology of classical physics. (Bohr, 1948, p. 313)
This is what Scheibe (1973) has called the "buffer postulate" because it prevents the quantum from penetrating into the classical description: A phenomenon must always be described in classical terms; Planck's constant does not occur in this description.
Together, the two postulates induce the following reasoning. In every phenomenon the interaction between the object and the apparatus comprises at least one quantum. But the description of the phenomenon must use classical notions in which the quantum of action does not occur. Hence, the interaction cannot be analysed in this description. On the other hand, the classical character of the description allows us to speak in terms of the object itself. Instead of saying: ‘the interaction between a particle and a photographic plate has resulted in a black spot in a certain place on the plate’, we are allowed to forgo mentioning the apparatus and say: ‘the particle has been found in this place’. The experimental context, rather than changing or disturbing pre-existing properties of the object, defines what can meaningfully be said about the object.
Because the interaction between object and apparatus is left out in our description of the phenomenon, we do not get the whole picture. Yet, any attempt to extend our description by performing the measurement of a different observable quantity of the object, or indeed, on the measurement apparatus, produces a new phenomenon and we are again confronted with the same situation. Because of the unanalyzable interaction in both measurements, the two descriptions cannot, generally, be united into a single picture. They are what Bohr calls complementary descriptions:
[the quantum of action]...forces us to adopt a new mode of description designated as complementary in the sense that any given application of classical concepts precludes the simultaneous use of other classical concepts which in a different connection are equally necessary for the elucidation of the phenomena. (Bohr, 1929, p. 10)
The most important example of complementary descriptions is provided by the measurements of the position and momentum of an object. If one wants to measure the position of the object relative to a given spatial frame of reference, the measuring instrument must be rigidly fixed to the bodies which define the frame of reference. But this implies the impossibility of investigating the exchange of momentum between the object and the instrument and we are cut off from obtaining any information about the momentum of the object. If, on the other hand, one wants to measure the momentum of an object the measuring instrument must be able to move relative to the spatial reference frame. Bohr here assumes that a momentum measurement involves the registration of the recoil of some movable part of the instrument and the use of the law of momentum conservation. The looseness of the part of the instrument with which the object interacts entails that the instrument cannot serve to accurately determine the position of the object. Since a measuring instrument cannot be rigidly fixed to the spatial reference frame and, at the same time, be movable relative to it, the experiments which serve to precisely determine the position and the momentum of an object are mutually exclusive. Of course, in itself, this is not at all typical for quantum mechanics. But, because the interaction between object and instrument during the measurement can neither be neglected nor determined the two measurements cannot be combined. This means that in the description of the object one must choose between the assignment of a precise position or of a precise momentum.
Similar considerations hold with respect to the measurement of time and energy. Just as the spatial coordinate system must be fixed by means of solid bodies so must the time coordinate be fixed by means of unperturbable, synchronised clocks. But it is precisely this requirement which prevents one from taking into account the exchange of energy with the instrument if this is to serve its purpose. Conversely, any conclusion about the object based on the conservation of energy prevents following its development in time.
The conclusion is that in quantum mechanics we are confronted with a complementarity between two descriptions which are united in the classical mode of description: the space-time description (or coordination) of a process and the description based on the applicability of the dynamical conservation laws. The quantum forces us to give up the classical mode of description (also called the ‘causal’ mode of description by Bohr[4]): it is impossible to form a classical picture of what is going on when radiation interacts with matter as, e.g., in the Compton effect.
Any arrangement suited to study the exchange of energy and momentum between the electron and the photon must involve a latitude in the space-time description sufficient for the definition of wave-number and frequency which enter in the relation [E = hν and p = hσ]. Conversely, any attempt of locating the collision between the photon and the electron more accurately would, on account of the unavoidable interaction with the fixed scales and clocks defining the space-time reference frame, exclude all closer account as regards the balance of momentum and energy. (Bohr, 1949, p. 210)
A causal description of the process cannot be attained; we have to content ourselves with complementary descriptions. "The viewpoint of complementarity may be regarded", according to Bohr, "as a rational generalization of the very ideal of causality".
In addition to complementary descriptions Bohr also talks about complementary phenomena and complementary quantities. Position and momentum, as well as time and energy, are complementary quantities.[5]
We have seen that Bohr's approach to quantum theory puts heavy emphasis on the language used to communicate experimental observations, which, in his opinion, must always remain classical. By comparison, he seemed to put little value on arguments starting from the mathematical formalism of quantum theory. This informal approach is typical of all of Bohr's discussions on the meaning of quantum mechanics. One might say that for Bohr the conceptual clarification of the situation has primary importance while the formalism is only a symbolic representation of this situation.
This is remarkable since, finally, it is the formalism which needs to be interpreted. This neglect of the formalism is one of the reasons why it is so difficult to get a clear understanding of Bohr's interpretation of quantum mechanics and why it has aroused so much controversy. We close this section by citing from an article of 1948 to show how Bohr conceived the role of the formalism of quantum mechanics:
The entire formalism is to be considered as a tool for deriving predictions, of definite or statistical character, as regards information obtainable under experimental conditions described in classical terms and specified by means of parameters entering into the algebraic or differential equations of which the matrices or the wave-functions, respectively, are solutions. These symbols themselves, as is indicated already by the use of imaginary numbers, are not susceptible to pictorial interpretation; and even derived real functions like densities and currents are only to be regarded as expressing the probabilities for the occurrence of individual events observable under well-defined experimental conditions. (Bohr, 1948, p. 314)
3.2 Bohr's view on the uncertainty relations
In his Como lecture, published in 1928, Bohr gave his own version of a derivation of the uncertainty relations between position and momentum and between time and energy. He started from the relations
E = hν and p = h/λ (13)
which connect the notions of energy E and momentum p from the particle picture with those of frequency ν and wavelength λ from the wave picture. He noticed that a wave packet of limited extension in space and time can only be built up by the superposition of a number of elementary waves with a large range of wave numbers and frequencies. Denoting the spatial and temporal extensions of the wave packet by Δx and Δt, and the extensions in the wave number σ := 1/λ and frequency by Δσ and Δν, it follows from Fourier analysis that in the most favorable case Δx Δσ ≈ Δt Δν ≈ 1, and, using (13), one obtains the relations
Δt ΔE ≈ Δx Δp ≈ h (14)
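Written out, the step to (14) is immediate: by (13), Δp = h Δσ and ΔE = h Δν, so

$$\Delta x\,\Delta p = h\,\Delta x\,\Delta\sigma \approx h, \qquad \Delta t\,\Delta E = h\,\Delta t\,\Delta\nu \approx h.$$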
Note that Δx, Δσ, etc., are not standard deviations but unspecified measures of the size of a wave packet. (The original text has equality signs instead of approximate equality signs, but, since Bohr does not define the spreads exactly, the use of approximate equality signs seems more in line with his intentions. Moreover, Bohr himself used approximate equality signs in later presentations.) These equations determine, according to Bohr: "the highest possible accuracy in the definition of the energy and momentum of the individuals associated with the wave field" (Bohr 1928, p. 571). He noted, "This circumstance may be regarded as a simple symbolic expression of the complementary nature of the space-time description and the claims of causality" (ibid).[6]

We note a few points about Bohr's view on the uncertainty relations. First of all, Bohr does not refer to discontinuous changes in the relevant quantities during the measurement process. Rather, he emphasizes the possibility of defining these quantities. This view is markedly different from Heisenberg's. A draft version of the Como lecture is even more explicit on the difference between Bohr and Heisenberg:
These reciprocal uncertainty relations were given in a recent paper of Heisenberg as the expression of the statistical element which, due to the feature of discontinuity implied in the quantum postulate, characterizes any interpretation of observations by means of classical concepts. It must be remembered, however, that the uncertainty in question is not simply a consequence of a discontinuous change of energy and momentum say during an interaction between radiation and material particles employed in measuring the space-time coordinates of the individuals. According to the above considerations the question is rather that of the impossibility of defining rigourously such a change when the space-time coordination of the individuals is also considered. (Bohr, 1985 p. 93)
Indeed, Bohr not only rejected Heisenberg's argument that these relations are due to discontinuous disturbances implied by the act of measuring, but also his view that the measurement process creates a definite result.
Nor did he approve of an epistemological formulation or one in terms of experimental inaccuracies:
[…] a sentence like "we cannot know both the momentum and the position of an atomic object" raises at once questions as to the physical reality of two such attributes of the object, which can be answered only by referring to the mutual exclusive conditions for an unambiguous use of space-time concepts, on the one hand, and dynamical conservation laws on the other hand. (Bohr, 1948, p. 315; also Bohr 1949, p. 211)
It would in particular not be out of place in this connection to warn against a misunderstanding likely to arise when one tries to express the content of Heisenberg's well-known indeterminacy relation by such a statement as ‘the position and momentum of a particle cannot simultaneously be measured with arbitrary accuracy’. According to such a formulation it would appear as though we had to do with some arbitrary renunciation of the measurement of either the one or the other of two well-defined attributes of the object, which would not preclude the possibility of a future theory taking both attributes into account on the lines of the classical physics. (Bohr 1937, p. 292)
Instead, Bohr always stressed that the uncertainty relations are first and foremost an expression of complementarity. This may seem odd since complementarity is a dichotomic relation between two types of description whereas the uncertainty relations allow for intermediate situations between two extremes. They "express" the dichotomy in the sense that if we take the energy and momentum to be perfectly well-defined, symbolically ΔE = Δp = 0, the position and time variables are completely undefined, Δx = Δt = ∞, and vice versa. But they also allow intermediate situations in which the mentioned uncertainties are all non-zero and finite. This more positive aspect of the uncertainty relation is mentioned in the Como lecture:
At the same time, however, the general character of this relation makes it possible to a certain extent to reconcile the conservation laws with the space-time coordination of observations, the idea of a coincidence of well-defined events in space-time points being replaced by that of unsharply defined individuals within space-time regions. (Bohr 1928, p. 571)
However, Bohr never followed up on this suggestion that we might be able to strike a compromise between the two mutually exclusive modes of description in terms of unsharply defined quantities. Indeed, an attempt to do so would take the formalism of quantum theory more seriously than the concepts of classical language, and this step Bohr refused to take. Instead, in his later writings he would be content with stating that the uncertainty relations simply defy an unambiguous interpretation in classical terms:
These so-called indeterminacy relations explicitly bear out the limitation of causal analysis, but it is important to recognize that no unambiguous interpretation of such a relation can be given in words suited to describe a situation in which physical attributes are objectified in a classical way. (Bohr, 1948, p.315)
It must here be remembered that even in the indeterminacy relation [Δq Δp ≈ h] we are dealing with an implication of the formalism which defies unambiguous expression in words suited to describe classical pictures. Thus a sentence like "we cannot know both the momentum and the position of an atomic object" raises at once questions as to the physical reality of two such attributes of the object, which can be answered only by referring to the conditions for an unambiguous use of space-time concepts, on the one hand, and dynamical conservation laws on the other hand. (Bohr, 1949, p. 211)
Finally, on a more formal level, we note that Bohr's derivation does not rely on the commutation relations (1) and (5), but on Fourier analysis. These two approaches are equivalent as far as the relationship between position and momentum is concerned, but this is not so for time and energy since most physical systems do not have a time operator. Indeed, in his discussion with Einstein (Bohr, 1949), Bohr considered time as a simple classical variable. This even holds for his famous discussion of the ‘clock-in-the-box’ thought-experiment where the time, as defined by the clock in the box, is treated from the point of view of classical general relativity. Thus, in an approach based on commutation relations, the position-momentum and time-energy uncertainty relations are not on equal footing, which is contrary to Bohr's approach in terms of Fourier analysis (Hilgevoord 1996 and 1998).
4. The Minimal Interpretation
In the previous two sections we have seen how both Heisenberg and Bohr attributed a far-reaching status to the uncertainty relations. They both argued that these relations place fundamental limits on the applicability of the usual classical concepts. Moreover, they both believed that these limitations were inevitable and forced upon us. However, we have also seen that they reached such conclusions by starting from radical and controversial assumptions. This entails, of course, that their radical conclusions remain unconvincing for those who reject these assumptions. Indeed, the operationalist-positivist viewpoint adopted by these authors has long since lost its appeal among philosophers of physics.
So the question may be asked what alternative views of the uncertainty relations are still viable. Of course, this problem is intimately connected with that of the interpretation of the wave function, and hence of quantum mechanics as a whole. Since there is no consensus about the latter, one cannot expect consensus about the interpretation of the uncertainty relations either. Here we only describe a point of view, which we call the ‘minimal interpretation’, that seems to be shared by both the adherents of the Copenhagen interpretation and of other views.
In quantum mechanics a system is supposed to be described by its quantum state, also called its state vector. Given the state vector, one can derive probability distributions for all the physical quantities pertaining to the system such as its position, momentum, angular momentum, energy, etc. The operational meaning of these probability distributions is that they correspond to the distribution of the values obtained for these quantities in a long series of repetitions of the measurement. More precisely, one imagines a great number of copies of the system under consideration, all prepared in the same way. On each copy the momentum, say, is measured. Generally, the outcomes of these measurements differ and a distribution of outcomes is obtained. The theoretical momentum distribution derived from the quantum state is supposed to coincide with the hypothetical distribution of outcomes obtained in an infinite series of repetitions of the momentum measurement. The same holds, mutatis mutandis, for all the other physical quantities pertaining to the system. Note that no simultaneous measurements of two or more quantities are required in defining the operational meaning of the probability distributions.
Uncertainty relations can be considered as statements about the spreads of the probability distributions of the several physical quantities arising from the same state. For example, the uncertainty relation between the position and momentum of a system may be understood as the statement that the position and momentum distributions cannot both be arbitrarily narrow -- in some sense of the word "narrow" -- in any quantum state. Inequality (9) is an example of such a relation in which the standard deviation is employed as a measure of spread. From this characterization of uncertainty relations it follows that a more detailed interpretation of the quantum state than the one given in the previous paragraph is not required to study uncertainty relations as such. In particular, a further ontological or linguistic interpretation of the notion of uncertainty, as limits on the applicability of our concepts given by Heisenberg or Bohr, need not be supposed.
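For concreteness, here is a minimal numerical sketch of this operational reading (our own illustration in Python with NumPy, not part of the entry; the test state and the units with ℏ = 1 are arbitrary choices): estimate the spreads of the position and momentum distributions determined by a wave function, and check inequality (9).

    import numpy as np

    # Position grid; units with hbar = 1 (an arbitrary choice).
    x = np.linspace(-20.0, 20.0, 4096)
    dx = x[1] - x[0]

    # Test state: a Gaussian with a quadratic ("chirped") phase --
    # an illustrative choice, not a minimum-uncertainty state.
    psi = np.exp(-x**2 / 4.0) * np.exp(1j * 0.3 * x**2)

    def spread(values, weights):
        # Standard deviation of `values` under (unnormalized) weights.
        w = weights / weights.sum()
        mean = (w * values).sum()
        return np.sqrt((w * (values - mean)**2).sum())

    # Spread of the position distribution |psi(x)|^2.
    dq = spread(x, np.abs(psi)**2)

    # Momentum amplitudes via FFT; p = hbar * k = 2*pi*f with hbar = 1.
    phi = np.fft.fftshift(np.fft.fft(psi))
    p = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(x.size, d=dx))
    dp = spread(p, np.abs(phi)**2)

    print(dq * dp, ">= 0.5:", dq * dp >= 0.5)  # Kennard's bound hbar/2

Since the weights are normalized inside spread, neither the wave function nor its Fourier transform needs to be normalized by hand.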
Indeed, this minimal interpretation leaves open whether it makes sense to attribute precise values of position and momentum to an individual system. Some interpretations of quantum mechanics, e.g. those of Heisenberg and Bohr, deny this; while others, e.g. the interpretation of de Broglie and Bohm, insist that each individual system has a definite position and momentum (see the entry on Bohmian mechanics). The only requirement is that, as an empirical fact, it is not possible to prepare pure ensembles in which all systems have the same values for these quantities, or ensembles in which the spreads are smaller than allowed by quantum theory. Although interpretations of quantum mechanics in which each system has a definite value for its position and momentum are still viable, this is not to say that they are without strange features of their own; they do not imply a return to classical physics.
We end with a few remarks on this minimal interpretation. First, it may be noted that the minimal interpretation of the uncertainty relations is little more than filling in the empirical meaning of inequality (9), or an inequality in terms of other measures of width, as obtained from the standard formalism of quantum mechanics. As such, this view shares many of the limitations we have noted above about this inequality. Indeed, it is not straightforward to relate the spread in a statistical distribution of measurement results with the inaccuracy of this measurement, such as, e.g. the resolving power of a microscope. Moreover, the minimal interpretation does not address the question whether one can make simultaneous accurate measurements of position and momentum. As a matter of fact, one can show that the standard formalism of quantum mechanics does not allow such simultaneous measurements. But this is not a consequence of relation (9).
If one feels that statements about inaccuracy of measurement, or the possibility of simultaneous measurements, belong to any satisfactory formulation of the uncertainty principle, the minimal interpretation may thus be too minimal.
• Beller, M. (1999) Quantum Dialogue (Chicago: University of Chicago Press).
• Bohr, N. (1928) ‘The Quantum postulate and the recent development of atomic theory’ Nature (Supplement) 121 580-590. Also in (Bohr, 1934), (Wheeler and Zurek, 1983), and in (Bohr, 1985).
• Bohr, N. (1929) ‘Introductory survey’ in (Bohr, 1934), pp. 1-24.
• Bohr, N. (1934) Atomic Theory and the Description of Nature (Cambridge: Cambridge University Press). Reissued in 1961. Appeared also as Volume I of The Philosophical Writings of Niels Bohr (Woodbridge Connecticut: Ox Bow Press, 1987).
• Bohr, N. (1937) ‘Causality and complementarity’ Philosophy of Science 4 289-298.
• Bohr, N. (1939) ‘The causality problem in atomic physics’ in New Theories in Physics (Paris: International Institute of Intellectual Co-operation). Also in (Bohr, 1996), pp. 303-322.
• Bohr, N. (1948) ‘On the notions of causality and complementarity’ Dialectica 2 312-319. Also in (Bohr, 1996) pp. 330-337.
• Bohr, N. (1949) ‘Discussion with Einstein on epistemological problems in atomic physics’ In Albert Einstein: philosopher-scientist. The library of living philosophers Vol. VII, P.A. Schilpp (ed.), (La Salle: Open Court) pp. 201-241.
• Bohr, N. (1985) Collected Works Volume 6, J. Kalckar (ed.) (Amsterdam: North-Holland).
• Bohr, N. (1996) Collected Works Volume 7, J. Kalckar (ed.) (Amsterdam: North-Holland).
• Bub, J. (2000) ‘Quantum mechanics as a principle theory’ Studies in History and Philosophy of Modern Physics 31B 75-94.
• Cassidy, D.C. (1992) Uncertainty, the Life and Science of Werner Heisenberg (New York: Freeman).
• Cassidy, D.C. (1998) ‘Answer to the question: When did the indeterminacy principle become the uncertainty principle?’ American Journal of Physics 66 278-279.
• Condon, E.U. (1929) ‘Remarks on uncertainty principles’ Science 69 573-574.
• Eddington, A. (1928) The Nature of the Physical World, (Cambridge: Cambridge University Press).
• Einstein, A. (1919) ‘My Theory’, The London Times, November 28, p. 13. Reprinted as ‘What is the theory of relativity?’ in Ideas and Opinions (New York: Crown Publishers, 1954) pp. 227-232.
• Folse, H.J. (1985) The Philosophy of Niels Bohr (Amsterdam: Elsevier).
• Heisenberg, W. (1925) ‘Über quantentheoretische Umdeutung kinematischer und mechanischer Beziehungen’ Zeitschrift für Physik 33 879-893.
• Heisenberg, W. (1926) ‘Quantenmechanik’ Die Naturwissenschaften 14 989-994.
• Heisenberg, W. (1927) ‘Ueber den anschaulichen Inhalt der quantentheoretischen Kinematik und Mechanik’ Zeitschrift für Physik 43 172-198. English translation in (Wheeler and Zurek, 1983), pp. 62-84.
• Heisenberg, W. (1927) ‘Ueber die Grundprincipien der "Quantenmechanik"’ Forschungen und Fortschritte 3 83.
• Heisenberg, W. (1928) ‘Erkenntnistheoretische Probleme der modernen Physik’ in (Heisenberg, 1984), pp. 22-28.
• Heisenberg W. (1930) Die Physikalischen Prinzipien der Quantenmechanik (Leipzig: Hirzel). English translation The Physical Principles of the Quantum Theory (Chicago: University of Chicago Press, 1930).
• Heisenberg, W. (1931) ‘Die Rolle der Unbestimmtheitsrelationen in der modernen Physik’ Monatshefte für Mathematik und Physik 38 365-372.
• Heisenberg, W. (1958) Physics and Philosophy (New York: Harper).
• Heisenberg, W. (1969) Der Teil und das Ganze (München : Piper).
• Heisenberg, W. (1975) ‘Bemerkungen über die Entstehung der Unbestimmtheitsrelation’ Physikalische Blätter 31 193-196. English translation in (Price and Chissick, 1977).
• Heisenberg W. (1984) Gesammelte Werke Volume C1, W. Blum, H.-P. Dürr and H. Rechenberg (eds) (München: Piper).
• Hilgevoord, J. and Uffink, J. (1988) ‘The mathematical expression of the uncertainty principle’ in Microphysical Reality and Quantum Description, A. van der Merwe et al. (eds.), (Dordrecht: Kluwer) pp. 91-114.
• Hilgevoord, J. and Uffink, J. (1990) ‘A new view on the uncertainty principle’ in Sixty-Two Years of Uncertainty, Historical and Physical Inquiries into the Foundations of Quantum Mechanics, A.I. Miller (ed.), (New York: Plenum) pp. 121-139.
• Hilgevoord, J. (1996) ‘The uncertainty principle for energy and time I’ American Journal of Physics 64, 1451-1456.
• Hilgevoord, J. (1998) ‘The uncertainty principle for energy and time II’ American Journal of Physics 66, 396-402.
• Hilgevoord, J. (2002) ‘Time in quantum mechanics’ American Journal of Physics 70 301-306.
• Hilgevoord, J. (2005) ‘Time in quantum mechanics: a story of confusion’ Studies in History and Philosophy of Modern Physics 36 29-60.
• Jammer, M. (1974) The Philosophy of Quantum Mechanics (New York: Wiley).
• Jordan, P. (1927) ‘Über eine neue Begründung der Quantenmechanik II’ Zeitschrift für Physik 44 1-25.
• Kaiser, H., Werner, S.A., and George, E.A. (1983) ‘Direct measurement of the longitudinal coherence length of a thermal neutron beam’ Physical Review Letters 50 560.
• Kennard E.H. (1927) ‘Zur Quantenmechanik einfacher Bewegungstypen’ Zeitschrift für Physik, 44 326-352.
• Miller, A.I. (1982) ‘Redefining Anschaulichkeit’ in: A. Shimony and H. Feshbach (eds) Physics as Natural Philosophy (Cambridge Mass.: MIT Press).
• Muga, J.G., Sala Mayato, R. and Egusquiza, I.L. (eds) (2002) Time in Quantum Mechanics (Berlin: Springer).
• Muller, F.A. (1997) ‘The equivalence myth of quantum mechanics’ Studies in History and Philosophy of Modern Physics 28 35-61, 219-247, ibid. 30 (1999) 543-545.
• Murdoch, D. (1987) Niels Bohr's Philosophy of Physics (Cambridge: Cambridge University Press).
• Nairz, O., Arndt, M. and Zeilinger, A. (2002) ‘Experimental verification of the Heisenberg uncertainty principle for fullerene molecules’ Physical Review A 65 032109.
• Pauli, W. (1979) Wissenschaftlicher Briefwechsel mit Bohr, Einstein, Heisenberg u.a. Volume 1 (1919-1929) A. Hermann, K. von Meyenn and V.F. Weisskopf (eds) (Berlin: Springer).
• Popper, K. (1967) ‘Quantum mechanics without "the observer"’ in M. Bunge (ed.) Quantum Theory and Reality (Berlin: Springer).
• Price, W.C. and Chissick, S.S (eds) (1977) The Uncertainty Principle and the Foundations of Quantum Mechanics, (New York: Wiley).
• Regt, H. de (1997) ‘Erwin Schrödinger, Anschaulichkeit, and quantum theory’ Studies in History and Philosophy of Modern Physics 28 461-481.
• Robertson, H.P. (1929) ‘The uncertainty principle’ Physical Review 34 163-164. Reprinted in Wheeler and Zurek (1983) pp. 127-128.
• Scheibe, E. (1973) The Logical Analysis of Quantum Mechanics (Oxford: Pergamon Press).
• Schrödinger, E. (1930) ‘Zum Heisenbergschen Unschärfeprinzip’ Berliner Berichte 296-303.
• Uffink, J. (1985) ‘Verification of the uncertainty principle in neutron interferometry’ Physics Letters A 108 59-62.
• Uffink, J. (1990) Measures of Uncertainty and the Uncertainty Principle PhD thesis, University of Utrecht.
• Uffink, J. (1994) ‘The joint measurement problem’ International Journal of Theoretical Physics 33 199-212.
• Uffink, J. and Hilgevoord, J. (1985) ‘Uncertainty principle and uncertainty relations’ Foundations of Physics 15 925-944.
• Wheeler, J.A. and Zurek, W.H. (eds) (1983) Quantum Theory and Measurement (Princeton NJ: Princeton University Press).
Copyright © 2006 by Jan Hilgevoord and Jos Uffink |
79ad7b6f7842dbbf | Euclidean quantum gravity
Introduction in layman's terms
In physics, a Wick rotation, named after Gian-Carlo Wick, is a method of finding a solution to a dynamics problem in n dimensions by transposing its description into n+1 dimensions, trading one dimension of space for one dimension of time. For the more mathematically inclined, it substitutes a mathematical problem in Minkowski space with a related problem in Euclidean space by means of a transformation that substitutes an imaginary-number variable for a real-number variable.
It is called a rotation because when we represent complex numbers as points in a plane, multiplying a complex number by i is equivalent to rotating the vector representing that number by an angle of π/2 about the origin.
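To make the rotation picture concrete, here is a minimal Python sketch (the sample number is an arbitrary choice for illustration) verifying that multiplication by i advances the argument of a complex number by π/2 while leaving its length unchanged:

import numpy as np

z = 3 + 4j          # an arbitrary complex number
rotated = 1j * z    # multiplying by i rotates z about the origin

# The argument (angle) grows by pi/2; the modulus (length) is unchanged.
assert np.isclose(np.angle(rotated) - np.angle(z), np.pi / 2)
assert np.isclose(abs(rotated), abs(z))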
For example, a Wick rotation could be used to relate a macroscopic event, such as temperature diffusion in a bath, to the underlying thermal movements of molecules. If we attempt to model the bath volume with its different temperature gradients, we would have to subdivide this volume into infinitesimal volumes and see how they interact. We know such infinitesimal volumes are in fact water molecules. If, to simplify and manage the problem, we represent all the molecules in the bath by a single molecule, this one molecule should walk along all the possible paths that the real molecules could follow. The path integral is the conceptual tool used to describe the movements of this single molecule, and the Wick rotation is one of the mathematical tools that is very useful for analysing a path-integral problem.
In a somewhat similar manner, the motion of a quantum object as described by quantum mechanics implies that it can exist simultaneously in different positions with different speeds. This differs clearly from the movement of a classical object (e.g., a billiard ball), for which a single path with precise position and speed can be described. A quantum object does not move from A to B along a single path, but moves from A to B by all possible ways at the same time. According to the principle of superposition (Richard Feynman's path-integral formulation), the path of the quantum object is described mathematically as a weighted average of all those possible paths. In 1966 an explicitly gauge-invariant functional-integral algorithm was found by DeWitt, which extended Feynman's new rules to all orders. What is appealing in this new approach is its lack of singularities, which are unavoidable in General Relativity.
Another operational problem with General Relativity is the difficulty of doing calculations, because of the complexity of the mathematical tools used. The path integral, in contrast, has been used in mechanics since the end of the 19th century and is well known. In addition, the path integral is a formalism used in both classical mechanics and quantum theories, so it might be a good starting point for unifying General Relativity and quantum theories. Some quantum features, like the Schrödinger equation and the heat equation, are also related by Wick rotation, so the Wick rotation is a good tool for relating a classical phenomenon to a quantum phenomenon. The ambition of Euclidean quantum gravity is to use the Wick rotation to find connections between a macroscopic phenomenon, gravity, and something more microscopic.
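To illustrate the heat-equation/Schrödinger-equation relation just mentioned, the following Python sketch (a toy check in units where the diffusion constant is 1 and ħ = 1, m = 1/2; the sample point is an arbitrary choice) verifies numerically that the free-particle heat kernel, continued to imaginary time by the Wick rotation τ → it, reproduces the free Schrödinger propagator:

import numpy as np

def heat_kernel(x, tau):
    # Fundamental solution of the heat equation u_tau = u_xx.
    return np.exp(-x**2 / (4 * tau)) / np.sqrt(4 * np.pi * tau)

def schrodinger_propagator(x, t):
    # Free propagator of i psi_t = -psi_xx (units with hbar = 1, m = 1/2).
    return np.exp(1j * x**2 / (4 * t)) / np.sqrt(4j * np.pi * t)

x, t = 1.3, 0.7  # arbitrary sample point
# Wick rotation: evaluating the heat kernel at imaginary time tau = i*t
# yields the quantum-mechanical propagator.
assert np.allclose(heat_kernel(x, 1j * t), schrodinger_propagator(x, t))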
More rigorous treatment
Euclidean quantum gravity refers to a Wick-rotated version of quantum gravity, formulated as a quantum field theory. The manifolds that are used in this formulation are 4-dimensional Riemannian manifolds instead of pseudo-Riemannian manifolds. It is also assumed that the manifolds are compact, connected and boundaryless (i.e. no singularities). Following the usual quantum field-theoretic formulation, the vacuum-to-vacuum amplitude is written as a functional integral over the metric tensor, which is now the quantum field under consideration:
Z = \int \mathcal{D}\mathbf{g}\, \mathcal{D}\phi\, \exp\left(-\int d^4x \sqrt{|\mathbf{g}|}\,(R+\mathcal{L}_\mathrm{matter})\right)
where φ denotes all the matter fields. See Einstein-Hilbert action.
Relation to the ADM formalism
Euclidean quantum gravity does relate back to the ADM formalism used in canonical quantum gravity, and it recovers the Wheeler–DeWitt equation under various circumstances. If we have some matter field \phi, then the path integral reads
Z = \int \mathcal{D}\mathbf{g}\, \mathcal{D}\phi\, \exp\left(-\int d^4x \sqrt{|\mathbf{g}|}\,(R+\mathcal{L}_\mathrm{matter})\right)
where the integration over \mathcal{D}\mathbf{g} includes an integration over the three-metric, the lapse function N, and the shift vector N^{a}. But we demand that Z be independent of the lapse function and shift vector at the boundaries, so we obtain
\frac{\delta Z}{\delta N}=0=\int \mathcal{D}\mathbf{g}\, \mathcal{D}\phi\, \left.\frac{\delta S}{\delta N}\right|_{\Sigma} \exp\left(-\int d^4x \sqrt{|\mathbf{g}|}\,(R+\mathcal{L}_\mathrm{matter})\right)
where \Sigma is the three-dimensional boundary. Observe that, since this expression vanishes, the functional derivative \delta S/\delta N must vanish as well, giving us the Wheeler–DeWitt equation. A similar statement may be made for the diffeomorphism constraint (take the functional derivative with respect to the shift functions instead).
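Written out schematically, and suppressing matter fields and operator-ordering subtleties, the constraint recovered this way is the compact form of the Wheeler–DeWitt equation, in which the Hamiltonian constraint operator annihilates the wave functional of the three-geometry:

\frac{\delta Z}{\delta N}=0 \quad\Longrightarrow\quad \hat{\mathcal{H}}\,\Psi[g_{ij}]=0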
References
• Arundhati Dasgupta, "The Measure in Euclidean Quantum Gravity." Eprint arXiv:1106.1679.
• Arundhati Dasgupta, "The gravitational path integral and trace of the diffeomorphisms." Gen.Rel.Grav. 43 (2011) 2237–2255. Eprint arXiv:0801.4770.
• Bryce S. DeWitt, Giampiero Esposito, "An introduction to quantum gravity." Int.J.Geom.Meth.Mod.Phys. 5 (2008) 101–156. Eprint arXiv:0711.2445.
• G. W. Gibbons and S. W. Hawking (eds.), Euclidean quantum gravity, World Scientific (1993)
• J. B. Hartle and S. W. Hawking, "Wave function of the Universe." Phys. Rev. D 28 (1983) 2960–2975, eprint. Formally relates Euclidean quantum gravity to ADM formalism.
• Claus Kiefer, Quantum Gravity. Oxford University Press, second ed.
• Emil Mottola, "Functional Integration Over Geometries." J.Math.Phys. 36 (1995) 2470–2511. Eprint arXiv:hep-th/9502109. |
84edbe423a39939f |
PowerPedia:Quantum mechanics
From PESWiki
Quantum mechanics is a first quantized quantum theory that supersedes classical mechanics at the atomic and subatomic levels. It is a fundamental branch of physics that provides the underlying mathematical framework for many fields of physics and chemistry, including condensed matter physics, atomic physics, molecular physics, computational chemistry, quantum chemistry, particle physics, and nuclear physics. Quantum mechanics is sometimes used in a more general sense, to mean quantum physics.
Terminology and theories
The term quantum (Latin, "how much") refers to discrete units that the theory assigns to certain physical quantities, such as the energy of an atom at rest. The discovery that waves could be measured in particle-like small packets of energy called quanta led to the branch of physics that deals with atomic and subatomic systems which we today call Quantum Mechanics. The foundations of quantum mechanics were established during the first half of the 20th century by Max Planck, Albert Einstein, Niels Bohr, Louis de Broglie, Werner Heisenberg, Erwin Schrödinger, Max Born, John von Neumann, Paul Dirac, Wolfgang Pauli and others. Some fundamental aspects of the theory are still actively studied.
A quantum theory is a theory of physics that uses Planck's constant. In contrast to classical physics, where variables are often continuous, many of the variables in a quantum theory take on discrete values. The quantum was a concept that grew out of the realisation that electromagnetic radiation came in discrete packets, called quanta. The process of converting a classical theory into a quantum theory is called quantisation and is divided into stages: first quantisation, second quantisation, etc., depending on the extent to which the theory is quantised. Quantum physics is the set of quantum theories. There are several of them:
• quantum mechanics -- a first quantised or semi-classical theory in which particle properties are quantised, but not particle numbers, fields and fundamental interactions.
• quantum field theory or QFT -- a second or canonically quantized theory in which all aspects of particles, fields and interactions are quantised, with the exception of gravitation. Quantum electrodynamics, quantum chromodynamics and electroweak theory are examples of relativistic fundamental QFTs which taken together form the Standard Model. Solid state physics is a non-fundamental QFT.
• quantum gravity -- a third quantised theory in which general relativity (i.e. gravity) is also quantised. This theory is incomplete, and is hoped to be finalised within the framework of a theory of everything, such as string theory or M-theory.
History of Quantum mechanics
In 1900, the German physicist Max Planck introduced the idea that energy is quantized, in order to derive a formula for the observed frequency dependence of the energy emitted by a black body. In 1905, Einstein explained the photoelectric effect by postulating that light energy comes in quanta called photons. The idea that each photon had to consist of energy in terms of quanta was a remarkable achievement as it effectively removed the possibility of black body radiation attaining infinite energy if it were to be explained in terms of wave forms only. In 1913, Bohr explained the spectral lines of the hydrogen atom, again by using quantization, in his paper of July 1913 On the Constitution of Atoms and Molecules. In 1924, the French physicist Louis de Broglie put forward his theory of matter waves by stating that particles can exhibit wave characteristics and vice versa.
These theories, though successful, were strictly phenomenological: there was no rigorous justification for quantization (aside, perhaps, from Henri Poincaré's discussion of Planck's theory in his 1912 paper Sur la théorie des quanta). They are collectively known as the old quantum theory. The phrase "quantum physics" was first used in Johnston's Planck's Universe in Light of Modern Physics. Modern quantum mechanics was born in 1925, when the German physicist Heisenberg developed matrix mechanics and the Austrian physicist Schrödinger invented wave mechanics and the non-relativistic Schrödinger equation. Schrödinger subsequently showed that the two approaches were equivalent.
Heisenberg formulated his uncertainty principle in 1927, and the Copenhagen interpretation took shape at about the same time. Starting around 1927, Paul Dirac began the process of unifying quantum mechanics with special relativity by discovering the Dirac equation for the electron. He also pioneered the use of operator theory, including the influential bra-ket notation, as described in his famous 1930 textbook. During the same period, Hungarian polymath John von Neumann formulated the rigorous mathematical basis for quantum mechanics as the theory of linear operators on Hilbert spaces, as described in his likewise famous 1932 textbook. These, like many other works from the founding period, still stand and remain widely used. The field of quantum chemistry was pioneered by physicists Walter Heitler and Fritz London, who published a study of the covalent bond of the hydrogen molecule in 1927. Quantum chemistry was subsequently developed by a large number of workers, including the American theoretical chemist Linus Pauling at Caltech, and John Slater into various theories such as Molecular Orbital Theory or Valence Theory.
Beginning in 1927, attempts were made to apply quantum mechanics to fields rather than single particles, resulting in what are known as quantum field theories. Early workers in this area included Dirac, Pauli, Weisskopf, and Jordan. This area of research culminated in the formulation of quantum electrodynamics by Feynman, Dyson, Schwinger, and Tomonaga during the 1940s. Quantum electrodynamics is a quantum theory of electrons, positrons, and the electromagnetic field, and served as a role model for subsequent quantum field theories.
The theory of quantum chromodynamics was formulated beginning in the early 1960s. The theory as we know it today was formulated by Politzer, Gross and Wilczek in 1975. Building on pioneering work by Schwinger, Higgs, Goldstone, Glashow, Weinberg and Salam independently showed how the weak nuclear force and quantum electrodynamics could be merged into a single electroweak force.
Quantum mechanics formalism
Quantum mechanics is a more fundamental theory than Newtonian mechanics and classical electromagnetism, in the sense that it provides accurate and precise descriptions for many phenomena that these "classical" theories simply cannot explain on the atomic and subatomic level. It is necessary to use quantum mechanics to understand the behavior of systems at atomic length scales and smaller. For example, if Newtonian mechanics governed the workings of an atom, electrons would rapidly travel towards and collide with the nucleus. However, in the natural world the electron normally remains in a stable orbit around a nucleus -- seemingly defying classical electromagnetism. Quantum mechanics was initially developed to explain the atom, especially the spectra of light emitted by different atomic species. The quantum theory of the atom developed as an explanation for the electron's staying in its orbital, which could not be explained by Newton's laws of motion and by classical electromagnetism.
In the formalism of quantum mechanics, the state of a system at a given time is described by a complex-valued wave function (sometimes referred to as an orbital in the case of atomic electrons), and more generally, by elements of a complex vector space. This abstract mathematical object allows for the calculation of probabilities of outcomes of concrete experiments. For example, it allows one to compute the probability of finding an electron in a particular region around the nucleus at a particular time. Contrary to classical mechanics, one cannot in general make predictions of arbitrary accuracy. For instance, electrons cannot in general be pictured as localized particles in space but rather should be thought of as "clouds" of negative charge spread out over the entire orbit. These clouds represent the regions around the nucleus where the probability of "finding" an electron is the largest. Heisenberg's Uncertainty Principle quantifies the inability to precisely locate the particle. The other exemplar that led to quantum mechanics was the study of electromagnetic waves such as light. When it was found in 1900 by Max Planck that the energy of waves could be described as consisting of small packets or quanta, Albert Einstein exploited this idea to show that an electromagnetic wave such as light could be described by a particle called the photon with a discrete energy dependent on its frequency. This led to a theory of unity between subatomic particles and electromagnetic waves called wave-particle duality in which particles and waves were neither one nor the other, but had certain properties of both. While quantum mechanics describes the world of the very small, it also is needed to explain certain "macroscopic quantum systems" such as superconductors and superfluids.
Broadly speaking, quantum mechanics incorporates four classes of phenomena that classical physics cannot account for: (i) the quantization (discretization) of certain physical quantities, (ii) wave-particle duality, (iii) the uncertainty principle, and (iv) quantum entanglement. Each of these phenomena will be described in greater detail in subsequent sections. Since the early days of quantum theory, physicists have made many attempts to combine it with the other highly successful theory of the twentieth century, Albert Einstein's General Theory of Relativity. While quantum mechanics is entirely consistent with special relativity, serious problems emerge when one tries to join the quantum laws with general relativity, a more elaborate description of spacetime which incorporates gravity. Resolving these inconsistencies has been a major goal of twentieth- and twenty-first-century physics. Despite the proposal of many novel ideas, the unification of quantum mechanics, which reigns in the domain of the very small, and general relativity, a superb description of the very large, remains a tantalizing future possibility. (See quantum gravity, string theory.) Because everything is composed of quantum-mechanical particles, the laws of classical physics must approximate the laws of quantum mechanics in the appropriate limit. This is often expressed by saying that in the case of large quantum numbers quantum mechanics "reduces" to classical mechanics and classical electromagnetism. This requirement is called the correspondence principle, or the classical limit.
There are numerous mathematically equivalent formulations of quantum mechanics. One of the oldest and most commonly used formulations is the transformation theory invented by Cambridge theoretical physicist Paul Dirac, which unifies and generalizes the two earliest formulations of quantum mechanics, matrix mechanics (invented by Werner Heisenberg) and wave mechanics (invented by Erwin Schrödinger). (Especially since Werner Heisenberg was awarded the Nobel Prize in Physics in 1932 for the creation of quantum mechanics, the role of Max Born has been obfuscated. A 2005 biography of Born details his role as the creator of the matrix formulation of quantum mechanics; this was recognized in a paper by Heisenberg, in 1950, honoring Max Planck. See Nancy Thorndike Greenspan, The End of the Certain World: The Life and Science of Max Born (Basic Books, 2005), pp. 124-128 and 285-286.) In this formulation, the instantaneous state of a quantum system encodes the probabilities of its measurable properties, or "observables". Examples of observables include energy, position, momentum, and angular momentum. Observables can be either continuous (e.g., the position of a particle) or discrete (e.g., the energy of an electron bound to a hydrogen atom).
Generally, quantum mechanics does not assign definite values to observables. Instead, it makes predictions about probability distributions; that is, the probability of obtaining each of the possible outcomes from measuring an observable. Naturally, these probabilities will depend on the quantum state at the instant of the measurement. There are, however, certain states that are associated with a definite value of a particular observable. These are known as "eigenstates" of the observable ("eigen" meaning "own" in German). In the everyday world, it is natural and intuitive to think of everything being in an eigenstate of every observable. Everything appears to have a definite position, a definite momentum, and a definite time of occurrence. However, Quantum Mechanics does not pinpoint the exact values for the position or momentum of a certain particle in a given space in a finite time, but, rather, it only provides a range of probabilities of where that particle might be. Therefore, it became necessary to use different words for a) the state of something having an uncertainty relation and b) a state that has a definite value. The latter is called the "eigenstate" of the property being measured.
A concrete example will be useful here. Let us consider a free particle. In quantum mechanics, there is wave-particle duality, so the properties of the particle can be described as a wave. Therefore, its quantum state can be represented as a wave, of arbitrary shape and extending over all of space, called a wavefunction. The position and momentum of the particle are observables. The Uncertainty Principle of quantum mechanics states that the position and the momentum cannot simultaneously be known with infinite precision. However, we can measure the position alone of a moving free particle, creating an eigenstate of position with a wavefunction that is very large at a particular position x, and zero everywhere else. If we perform a position measurement on such a wavefunction, we will obtain the result x with 100% probability. In other words, we will know the position of the free particle. This is called an eigenstate of position. If the particle is in an eigenstate of position then its momentum is completely unknown. An eigenstate of momentum, on the other hand, has the form of a plane wave. It can be shown that the wavelength is equal to h/p, where h is Planck's constant and p is the momentum of the eigenstate. If the particle is in an eigenstate of momentum then its position is completely blurred out.
Usually, a system will not be in an eigenstate of whatever observable we are interested in. However, if we measure the observable, the wavefunction will immediately become an eigenstate of that observable. This process is known as wavefunction collapse. If we know the wavefunction at the instant before the measurement, we will be able to compute the probability of collapsing into each of the possible eigenstates. For example, the free particle in our previous example will usually have a wavefunction that is a wave packet centered around some mean position x0, neither an eigenstate of position nor of momentum. When we measure the position of the particle, it is impossible for us to predict with certainty the result that we will obtain. It is probable, but not certain, that it will be near x0, where the amplitude of the wavefunction is large. After we perform the measurement, obtaining some result x, the wavefunction collapses into a position eigenstate centered at x.
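The bookkeeping described above can be made concrete in a few lines of Python; this is a minimal sketch with an invented two-level observable and state (not any specific physical system): expand the state in the observable's eigenbasis, read off the Born-rule probabilities, then "collapse" by sampling one eigenstate:

import numpy as np

rng = np.random.default_rng(0)

# An observable is a Hermitian matrix; its eigenvectors are the eigenstates.
observable = np.array([[0.0, 1.0],
                       [1.0, 0.0]])
eigvals, eigvecs = np.linalg.eigh(observable)

# An arbitrary normalized state vector (chosen for illustration).
psi = np.array([0.6, 0.8j])
psi = psi / np.linalg.norm(psi)

# Born rule: the probability of each outcome is |<eigenstate|psi>|^2.
amplitudes = eigvecs.conj().T @ psi
probs = np.abs(amplitudes) ** 2
print(dict(zip(eigvals.round(3), probs.round(3))))  # probabilities sum to 1

# "Collapse": sample an outcome and replace psi by the matching eigenstate.
k = rng.choice(len(eigvals), p=probs)
psi_after = eigvecs[:, k]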
As mentioned in the introduction, there are several classes of phenomena that appear under quantum mechanics which have no analogue in classical physics. These are sometimes referred to as "quantum effects". The first type of quantum effect is the quantization of certain physical quantities. Quantization first arose in the mathematical formulae of Max Planck in 1900 as discussed in the introduction. Max Planck was analyzing how the radiation emitted from a body was related to its temperature, in other words, he was analyzing the energy of a wave. The energy of a wave could not be infinite, so Planck used the property of the wave we designate as the frequency to define energy. Max Planck discovered a constant that when multiplied by the frequency of any wave gives the energy of the wave. This constant is referred to by the letter h in mathematical formulae. It is a cornerstone of physics. By measuring the energy in a discrete non-continuous portion of the wave, the wave took on the appearance of chunks or packets of energy. These chunks of energy resembled particles. So energy is said to be quantized because it only comes in discrete chunks instead of a continuous range of energies.
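Concretely, the relation is E = h·f; a minimal Python sketch (the frequency is an arbitrary illustrative choice in the visible range):

h = 6.62607015e-34   # Planck's constant, J s
f = 5.0e14           # frequency of a light wave, Hz (illustrative value)

E = h * f            # energy of a single quantum of this wave, in joules
print(f"E = {E:.3e} J = {E / 1.602176634e-19:.2f} eV")  # about 2 eV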
In the example we have given, of a free particle in empty space, both the position and the momentum are continuous observables. However, if we restrict the particle to a region of space (the so-called "particle in a box" problem), the momentum observable will become discrete; it will only take on the values n·h/(2L), where L is the length of the box, h is Planck's constant, and n is a positive integer. Such observables are said to be quantized, and they play an important role in many physical systems. Examples of quantized observables include angular momentum, the total energy of a bound system, and the energy contained in an electromagnetic wave of a given frequency. Another quantum effect is the uncertainty principle, which is the phenomenon that consecutive measurements of two or more observables may possess a fundamental limitation on accuracy. In our free particle example, it turns out that it is impossible to find a wavefunction that is an eigenstate of both position and momentum. This implies that position and momentum can never be simultaneously measured with arbitrary precision, even in principle: as the precision of the position measurement improves, the maximum precision of the momentum measurement decreases, and vice versa. Those variables for which it holds (e.g., momentum and position, or energy and time) are canonically conjugate variables in classical physics.
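As a worked example of the quantized values n·h/(2L) mentioned above, here is a minimal Python sketch for an electron in a 1 nm box (the box length is an illustrative assumption), listing the first few allowed momentum magnitudes and the corresponding kinetic energies E = p²/(2m):

h = 6.62607015e-34        # Planck's constant, J s
m_e = 9.1093837015e-31    # electron mass, kg
L = 1e-9                  # box length: 1 nm (illustrative choice)

for n in range(1, 4):
    p = n * h / (2 * L)               # quantized momentum magnitude
    E_joule = p**2 / (2 * m_e)        # kinetic energy
    E_eV = E_joule / 1.602176634e-19
    print(f"n={n}: p={p:.3e} kg m/s, E={E_eV:.3f} eV")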
Another quantum effect is the wave-particle duality. It has been shown that, under certain experimental conditions, microscopic objects like atoms or electrons exhibit particle-like behavior, such as scattering. ("Particle-like" in the sense of an object that can be localized to a particular region of space.) Under other conditions, the same type of objects exhibit wave-like behavior, such as interference. We can observe only one type of property at a time, never both at the same time. Another quantum effect is quantum entanglement. In some cases, the wave function of a system composed of many particles cannot be separated into independent wave functions, one for each particle. In that case, the particles are said to be "entangled". If quantum mechanics is correct, entangled particles can display remarkable and counter-intuitive properties. For example, a measurement made on one particle can produce, through the collapse of the total wavefunction, an instantaneous effect on other particles with which it is entangled, even if they are far apart. (This does not conflict with special relativity because information cannot be transmitted in this way.)
Mathematical formulation
In the mathematically rigorous formulation of quantum mechanics, developed by Paul Dirac and John von Neumann, the possible states of a quantum mechanical system are represented by unit vectors (called "state vectors") residing in a complex separable Hilbert space (variously called the "state space" or the "associated Hilbert space" of the system) well defined up to a complex number of norm 1 (the phase factor). In other words, the possible states are points in the projectivization of a Hilbert space. The exact nature of this Hilbert space is dependent on the system; for example, the state space for position and momentum states is the space of square-integrable functions, while the state space for the spin of a single proton is just the product of two complex planes. Each observable is represented by a densely defined Hermitian (or self-adjoint) linear operator acting on the state space. Each eigenstate of an observable corresponds to an eigenvector of the operator, and the associated eigenvalue corresponds to the value of the observable in that eigenstate. If the operator's spectrum is discrete, the observable can only attain those discrete eigenvalues.
Interactions with other scientific theories
Quantum mechanics has had enormous success in explaining many of the features of our world. The individual behaviour of the subatomic particles that make up all forms of matter - electrons, protons, neutrons, photons and so forth - can often only be satisfactorily described using quantum mechanics. Quantum mechanics has strongly influenced string theory, a candidate for a theory of everything (see Reductionism). It is also related to statistical mechanics. Quantum mechanics is important for understanding how individual atoms combine covalently to form chemicals or molecules. The application of quantum mechanics to chemistry is known as quantum chemistry. (Relativistic) quantum mechanics can in principle mathematically describe most of chemistry. Quantum mechanics can provide quantitative insight into ionic and covalent bonding processes by explicitly showing which molecules are energetically favorable to which others, and by approximately how much. Most of the calculations performed in computational chemistry rely on quantum mechanics. Much of modern technology operates at a scale where quantum effects are significant. Examples include the laser, the transistor, the electron microscope, and magnetic resonance imaging. The study of semiconductors led to the invention of the diode and the transistor, which are indispensable for modern electronics. Researchers are currently seeking robust methods of directly manipulating quantum states. Efforts are being made to develop quantum cryptography, which will allow guaranteed secure transmission of information. A more distant goal is the development of quantum computers, which are expected to perform certain computational tasks exponentially faster than classical computers. Another active research topic is quantum teleportation, which deals with techniques to transmit quantum states over arbitrary distances.
Philosophical consequences
An interpretation of quantum mechanics is an attempt to answer the question, What exactly is quantum mechanics talking about? The question has its historical roots in the nature of quantum mechanics itself, which was considered a radical departure from previous physical theories. However, quantum mechanics has been described as "the most precisely tested and most successful theory in the history of science" (cf. Jackiw and Kleppner, 2000).
An interpretation can be characterized by whether it satisfies certain properties, such as:
• Realism
• Completeness
• Local realism
• Determinism
To explain these properties, we need to be more explicit about the kind of picture an interpretation provides. To that end we will regard an interpretation as a correspondence between the elements of the mathematical formalism M and the elements of an interpreting structure I, where:
• The mathematical formalism consists of the Hilbert space machinery of ket-vectors, self-adjoint operators acting on the space of ket-vectors, unitary time dependence of ket-vectors and measurement operations. In this context a measurement operation can be regarded as a transformation which carries a ket-vector into a probability distribution on ket-vectors. See also quantum operations for a formalization of this concept.
• The interpreting structure includes states, transitions between states, measurement operations and possibly information about spatial extension of these elements. A measurement operation here refers to an operation which returns a value and results in a possible system state change. Spatial information, for instance would be exhibited by states represented as functions on configuration space. The transitions may be non-deterministic or probabilistic or there may be infinitely many states. However, the critical assumption of an interpretation is that the elements of I are regarded as physically real.
In this sense, an interpretation can be regarded as a semantics for the mathematical formalism.
In particular, the bare instrumentalist view of quantum mechanics outlined in the previous section is not an interpretation at all since it makes no claims about elements of physical reality. The current use in physics of "completeness" and "realism" is often considered to have originated in the paper (Einstein et al., 1935) which proposed the EPR paradox. In that paper the authors proposed the concept "element of reality" and "completeness" of a physical theory. Though they did not define "element of reality", they did provide a sufficient characterization for it, namely a quantity whose value can be predicted with certainty before measuring it or disturbing it in any way. EPR define a "complete physical theory" as one in which every element of physical reality is accounted for by the theory. In the semantic view of interpretation, an interpretation of a theory is complete if every element of the interpreting structure is accounted for by the mathematical formalism. Realism is a property of each one of the elements of the mathematical formalism; any such element is real if it corresponds to something in the interpreting structure. For instance, in some interpretations of quantum mechanics (such as the many-worlds interpretation) the ket vector associated to the system state is assumed to correspond to an element of physical reality, while in others it does not.
Determinism is a property characterizing state changes due to the passage of time, namely that the state at an instant of time in the future is a function of the state at the present (see time evolution). It may not always be clear whether a particular interpreting structure is deterministic or not, precisely because there may not be a clear choice for a time parameter. Moreover, a given theory may have two interpretations, one of which is deterministic, and the other not.
Local realism has two parts:
• The value returned by a measurement corresponds to the value of some function on the state space. Stated in another way, this value is an element of reality;
• The effects of measurement have a propagation speed not exceeding some universal bound (e.g., the speed of light). In order for this to make sense, measurement operations must be spatially localized in the interpreting structure.
A precise formulation of local realism in terms of a local hidden variable theory was proposed by John Bell. Bell's theorem and its experimental verification restrict the kinds of properties a quantum theory can have. For instance, Bell's theorem implies quantum mechanics cannot satisfy local realism.
Albert Einstein, himself one of the founders of quantum theory, disliked this loss of determinism in measurement. He held that there should be a local hidden variable theory underlying quantum mechanics and consequently the present theory was incomplete. He produced a series of objections to the theory, the most famous of which has become known as the EPR paradox. John Bell showed that the EPR paradox led to experimentally testable differences between quantum mechanics and local hidden variable theories. Experiments have been taken as confirming that quantum mechanics is correct and the real world cannot be described in terms of such hidden variables. "Loopholes" in the experiments, however, mean that the question is still not quite settled.
Consistent histories
The consistent histories interpretation generalizes the conventional Copenhagen interpretation and attempts to provide a natural interpretation of quantum cosmology. The theory is based on a consistency criterion that then allows the history of a system to be described so that the probabilities for each history obey the additive rules of classical probability while being consistent with the Schrödinger equation. According to this interpretation, the purpose of a quantum-mechanical theory is to predict probabilities of various alternative histories.
Many worlds
The many-worlds interpretation (or MWI) is an interpretation of quantum mechanics that rejects the non-deterministic and irreversible wavefunction collapse associated with measurement in the Copenhagen interpretation in favor of a description in terms of quantum entanglement and reversible time evolution of states. The phenomena associated with measurement are explained by decoherence, which occurs when states interact with the environment. As a result of the decoherence, the world-lines of macroscopic objects repeatedly split into mutually unobservable, branching histories: distinct universes within a greater multiverse.
The Copenhagen Interpretation
The Copenhagen interpretation is an interpretation of quantum mechanics formulated by Niels Bohr and Werner Heisenberg while collaborating in Copenhagen around 1927. Bohr and Heisenberg extended the probabilistic interpretation of the wavefunction, proposed by Max Born. The Copenhagen interpretation rejects questions like "where was the particle before I measured its position" as meaningless. The act of measurement causes an instantaneous "collapse of the wave function". This means that the measurement process randomly picks out exactly one of the many possibilities allowed for by the state's wave function, and the wave function instantaneously changes to reflect that pick.
Quantum Logic
Quantum logic can be regarded as a kind of propositional logic suitable for understanding the apparent anomalies regarding quantum measurement, most notably those concerning composition of measurement operations of complementary variables. This research area and its name originated in the 1936 paper by Garrett Birkhoff and John von Neumann, who attempted to reconcile some of the apparent inconsistencies of classical boolean logic with the facts related to measurement and observation in quantum mechanics.
The Bohm interpretation
The Bohm interpretation of quantum mechanics is an interpretation postulated by David Bohm in which the existence of a non-local universal wavefunction allows distant particles to interact instantaneously. The interpretation generalizes Louis de Broglie's pilot wave theory from 1927, which posits that both wave and particle are real. The wave function 'guides' the motion of the particle, and evolves according to the Schrödinger equation. The interpretation assumes a single, nonsplitting universe (unlike the Everett many-worlds interpretation) and is deterministic (unlike the Copenhagen interpretation). It says the state of the universe evolves smoothly through time, without the collapsing of wavefunctions when a measurement occurs, as in the Copenhagen interpretation. However, it does this by assuming a number of hidden variables, namely the positions of all the particles in the universe, which, like probability amplitudes in other interpretations, can never be measured directly.
Transactional interpretation
The transactional interpretation of quantum mechanics (TIQM) by John Cramer is an unusual interpretation of quantum mechanics that describes quantum interactions in terms of a standing wave formed by retarded (forward-in-time) and advanced (backward-in-time) waves. The author argues that it avoids the philosophical problems with the Copenhagen interpretation and the role of the observer, and resolves various quantum paradoxes.
Consciousness causes collapse
Consciousness causes collapse is the speculative theory that observation by a conscious observer is responsible for the wavefunction collapse. It is an attempt to solve the Wigner's friend paradox by simply stating that collapse occurs at the first "conscious" observer. Supporters claim this is not a revival of substance dualism, since (in a ramification of this view) consciousness and objects are entangled and cannot be considered as distinct. The consciousness causes collapse theory can be considered as a speculative appendage to almost any interpretation of quantum mechanics and most physicists reject it as unverifiable and introducing unnecessary elements into physics.
Relational Quantum Mechanics
The essential idea behind Relational Quantum Mechanics, following the precedent of Special Relativity, is that different observers may give different accounts of the same series of events: for example, to one observer at a given point in time, a system may be in a single, "collapsed" eigenstate, while to another observer at the same time, it may be in a superposition of two or more states. Consequently, if quantum mechanics is to be a complete theory, Relational Quantum Mechanics argues that the notion of "state" describes not the observed system itself, but the relationship, or correlation, between the system and its observer(s). The state vector of conventional quantum mechanics becomes a description of the correlation of some degrees of freedom in the observer, with respect to the observed system. However, it is held by Relational Quantum Mechanics that this applies to all physical objects, whether or not they are conscious or macroscopic. Any "measurement event" is seen simply as an ordinary physical interaction, an establishment of the sort of correlation discussed above. Thus the physical content of the theory has to do not with objects themselves, but with the relations between them. For more information, see Rovelli (1996).
Modal Interpretations of Quantum Theory
Modal interpretations of Quantum mechanics were first conceived of in 1972 by B. van Fraassen, in his paper “A formal approach to the philosophy of science". However, this term now is used to describe a larger set of models that grew out of this approach. The Stanford Encyclopedia of Philosophy describes several versions:
• The Copenhagen Variant
• Kochen-Dieks-Healey Interpretations
• Motivating Early Modal Interpretations, based on the work of R. Clifton, M. Dickson and J. Bub.
Explanation of quantum mechanics enigmas
The Spacetime Model can be considered as the continuation of Einstein's works. Instead of limiting spacetime to General Relativity (mass and gravity), the author has extended it to all elements of the universe. The result is that spacetime explains with logic and consistency 53 enigmas of Quantum Mechanics: wave-particle duality, elementary particles (quarks and leptons), mass, gravity, charge, antimatter, Standard Model.... It unifies the three basic forces (electroweak, strong nuclear and gravity) in two generic forces. Contrary to other theories, the Spacetime Model requires only four dimensions: x, y, z and t. To get additional information concerning this new theory, please download the free 220-page document at
|
a1e313ed21beb000 | Electron
[Image: A glass tube containing a glowing green electron beam. Experiments with a Crookes tube first demonstrated the particle nature of electrons; in this illustration, the profile of the Maltese-cross-shaped target is projected against the tube face at right by a beam of electrons.[1]]
Composition: Elementary particle[2]
Statistics: Fermionic
Generation: First
Interactions: Gravity, Electromagnetic, Weak
Symbol: e−, β−
Antiparticle: Positron (also called antielectron)
Theorized: Richard Laming (1838–1851)[3]
Discovered: J. J. Thomson (1897)[6]
Mass: 5.4857990946(22)×10−4 u[7] (0.510998928(11) MeV/c2)[7]
Electric charge: −1 e[note 2] (−4.80320451(10)×10−10 esu)
Magnetic moment: −1.00115965218076(27) μB[7]
Spin: 1/2
[Image: Theoretical estimates of the electron position probability density for orbitals of the hydrogen atom.]
The electron is a subatomic particle, symbol e− or β−, with a negative elementary electric charge.[8] Electrons belong to the first generation of the lepton particle family,[9] and are generally thought to be elementary particles because they have no known components or substructure.[2] The electron has a mass that is approximately 1/1836 that of the proton.[10] Quantum mechanical properties of the electron include an intrinsic angular momentum (spin) of a half-integer value in units of ħ, which means that it is a fermion. Being fermions, no two electrons can occupy the same quantum state, in accordance with the Pauli exclusion principle.[9] Like all matter, electrons have properties of both particles and waves, and so can collide with other particles and can be diffracted like light. The wave properties of electrons are easier to observe in experiments than those of other particles like neutrons and protons, because electrons have a lower mass and hence a longer de Broglie wavelength for typical energies.
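The mass dependence of the de Broglie wavelength can be seen with a short Python sketch (using the non-relativistic formula λ = h/√(2mE) at an arbitrary sample kinetic energy of 1 eV):

import math

h = 6.62607015e-34        # Planck's constant, J s
eV = 1.602176634e-19      # joules per electronvolt
m_e = 9.1093837015e-31    # electron mass, kg
m_n = 1.67492749804e-27   # neutron mass, kg

def de_broglie(m, E_eV):
    # Non-relativistic de Broglie wavelength: lambda = h / sqrt(2 m E).
    return h / math.sqrt(2 * m * E_eV * eV)

E = 1.0  # kinetic energy in eV (illustrative)
print(f"electron: {de_broglie(m_e, E) * 1e9:.3f} nm")   # ~1.2 nm
print(f"neutron:  {de_broglie(m_n, E) * 1e9:.5f} nm")   # ~0.03 nm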
Interactions involving electrons and other subatomic particles are of interest in fields such as chemistry and nuclear physics. The Coulomb force interaction between positive protons inside atomic nuclei and negative electrons composes atoms. Ionization or changes in the proportions of particles changes the binding energy of the system. The exchange or sharing of the electrons between two or more atoms is the main cause of chemical bonding.[12] British natural philosopher Richard Laming first hypothesized the concept of an indivisible quantity of electric charge to explain the chemical properties of atoms in 1838;[4] Irish physicist George Johnstone Stoney named this charge 'electron' in 1891, and J. J. Thomson and his team of British physicists identified it as a particle in 1897.[6][13][14] Electrons can also participate in nuclear reactions, such as nucleosynthesis in stars, where they are known as beta particles. Electrons may be created through beta decay of radioactive isotopes and in high-energy collisions, for instance when cosmic rays enter the atmosphere. The antiparticle of the electron is called the positron; it is identical to the electron except that it carries electrical and other charges of the opposite sign. When an electron collides with a positron, both particles may be totally annihilated, producing gamma ray photons.
The ancient Greeks noticed that amber attracted small objects when rubbed with fur. Along with lightning, this phenomenon is one of humanity's earliest recorded experiences with electricity.[15] In his 1600 treatise De Magnete, the English scientist William Gilbert coined the New Latin term electricus, to refer to this property of attracting small objects after being rubbed.[16] Both electric and electricity are derived from the Latin ēlectrum (also the root of the alloy of the same name), which came from the Greek word for amber, ἤλεκτρον (ēlektron).
In 1891 Stoney coined the term electron to describe these elementary charges, writing later in 1894: "... an estimate was made of the actual amount of this most remarkable fundamental unit of electricity, for which I have since ventured to suggest the name electron".[21] The word electron is a combination of the words electr(ic) and (i)on.[22] The suffix -on which is now used to designate other subatomic particles, such as a proton or neutron, is in turn derived from electron.[23][24]
[Image: A round glass vacuum tube with a glowing circular beam inside.]
The German physicist Johann Wilhelm Hittorf studied electrical conductivity in rarefied gases: in 1869, he discovered a glow emitted from the cathode that increased in size with decrease in gas pressure. In 1876, the German physicist Eugen Goldstein showed that the rays from this glow cast a shadow, and he dubbed the rays cathode rays.[26] During the 1870s, the English chemist and physicist Sir William Crookes developed the first cathode ray tube to have a high vacuum inside.[27] He then showed that the luminescence rays appearing within the tube carried energy and moved from the cathode to the anode. Furthermore, by applying a magnetic field, he was able to deflect the rays, thereby demonstrating that the beam behaved as though it were negatively charged.[28][29] In 1879, he proposed that these properties could be explained by what he termed 'radiant matter'. He suggested that this was a fourth state of matter, consisting of negatively charged molecules that were being projected with high velocity from the cathode.[30]
In 1892 Hendrik Lorentz suggested that the mass of these particles (electrons) could be a consequence of their electric charge.[32]
In 1896, the British physicist J. J. Thomson, with his colleagues John S. Townsend and H. A. Wilson,[13] performed experiments indicating that cathode rays really were unique particles, rather than waves, atoms or molecules as was believed earlier.[6] Thomson made good estimates of both the charge e and the mass m, finding that cathode ray particles, which he called "corpuscles," had perhaps one thousandth of the mass of the least massive ion known: hydrogen.[6][14] He showed that their charge to mass ratio, e/m, was independent of cathode material. He further showed that the negatively charged particles produced by radioactive materials, by heated materials and by illuminated materials were universal.[6][33] The name electron was again proposed for these particles by the Irish physicist George F. Fitzgerald, and the name has since gained universal acceptance.[28]
While studying naturally fluorescing minerals in 1896, the French physicist Henri Becquerel discovered that they emitted radiation without any exposure to an external energy source. These radioactive materials became the subject of much interest by scientists, including the New Zealand physicist Ernest Rutherford who discovered they emitted particles. He designated these particles alpha and beta, on the basis of their ability to penetrate matter.[34] In 1900, Becquerel showed that the beta rays emitted by radium could be deflected by an electric field, and that their mass-to-charge ratio was the same as for cathode rays.[35] This evidence strengthened the view that electrons existed as components of atoms.[36][37]
Around the beginning of the twentieth century, it was found that under certain conditions a fast-moving charged particle caused a condensation of supersaturated water vapor along its path. In 1911, Charles Wilson used this principle to devise his cloud chamber so he could photograph the tracks of charged particles, such as fast-moving electrons.[40]
Atomic theory
By 1914, experiments by physicists Ernest Rutherford, Henry Moseley, James Franck and Gustav Hertz had largely established the structure of an atom as a dense nucleus of positive charge surrounded by lower-mass electrons.[41] In 1913, Danish physicist Niels Bohr postulated that electrons resided in quantized energy states, with the energy determined by the angular momentum of the electron's orbits about the nucleus. The electrons could move between these states, or orbits, by the emission or absorption of photons at specific frequencies. By means of these quantized orbits, he accurately explained the spectral lines of the hydrogen atom.[42] However, Bohr's model failed to account for the relative intensities of the spectral lines and it was unsuccessful in explaining the spectra of more complex atoms.[41]
Chemical bonds between atoms were explained by Gilbert Newton Lewis, who in 1916 proposed that a covalent bond between two atoms is maintained by a pair of electrons shared between them.[43] Later, in 1927, Walter Heitler and Fritz London gave the full explanation of the electron-pair formation and chemical bonding in terms of quantum mechanics.[44] In 1919, the American chemist Irving Langmuir elaborated on Lewis's static model of the atom and suggested that all electrons were distributed in successive "concentric (nearly) spherical shells, all of equal thickness".[45] The shells were, in turn, divided by him into a number of cells each containing one pair of electrons. With this model Langmuir was able to qualitatively explain the chemical properties of all elements in the periodic table,[44] which were known to largely repeat themselves according to the periodic law.[46]
Quantum mechanics
In his 1924 dissertation Recherches sur la théorie des quanta (Research on Quantum Theory), French physicist Louis de Broglie hypothesized that all matter possesses a de Broglie wave similar to light.[50] That is, under the appropriate conditions, electrons and other matter would show properties of either particles or waves. The corpuscular properties of a particle are demonstrated when it is shown to have a localized position in space along its trajectory at any given moment.[51] Wave-like nature is observed, for example, when a beam of light is passed through parallel slits and creates interference patterns. In 1927, the interference effect was found in a beam of electrons by English physicist George Paget Thomson with a thin metal film and by American physicists Clinton Davisson and Lester Germer using a crystal of nickel.[52]
[Image: A symmetrical blue cloud that decreases in intensity from the center outward.]
De Broglie's prediction of a wave nature for electrons led Erwin Schrödinger to postulate a wave equation for electrons moving under the influence of the nucleus in the atom. In 1926, this equation, the Schrödinger equation, successfully described how electron waves propagated.[53] Rather than yielding a solution that determined the location of an electron over time, this wave equation could also be used to predict the probability of finding an electron near a position, especially a position near where the electron was bound in space, for which the electron wave equations did not change in time. This approach led to a second formulation of quantum mechanics (the first being by Heisenberg in 1925), and solutions of Schrödinger's equation, like Heisenberg's, provided derivations of the energy states of an electron in a hydrogen atom that were equivalent to those that had been derived first by Bohr in 1913, and that were known to reproduce the hydrogen spectrum.[54] Once spin and the interaction between multiple electrons were considered, quantum mechanics later made it possible to predict the configuration of electrons in atoms with higher atomic numbers than hydrogen.[55]
In 1928, building on Wolfgang Pauli's work, Paul Dirac produced a model of the electron, the Dirac equation, consistent with relativity theory, by applying relativistic and symmetry considerations to the Hamiltonian formulation of the quantum mechanics of the electromagnetic field.[56] To resolve some problems within his relativistic equation, in 1930 Dirac developed a model of the vacuum as an infinite sea of particles having negative energy, which was dubbed the Dirac sea. This led him to predict the existence of a positron, the antimatter counterpart of the electron.[57] This particle was discovered in 1932 by Carl Anderson, who proposed calling standard electrons negatrons, and using electron as a generic term to describe both the positively and negatively charged variants.
In 1947, Willis Lamb, working in collaboration with graduate student Robert Retherford, found that certain quantum states of the hydrogen atom, which should have the same energy, were shifted in relation to each other; the difference came to be called the Lamb shift. About the same time, Polykarp Kusch, working with Henry M. Foley, discovered that the magnetic moment of the electron is slightly larger than predicted by Dirac's theory. This small difference was later called the anomalous magnetic dipole moment of the electron, and was explained by the theory of quantum electrodynamics, developed by Sin-Itiro Tomonaga, Julian Schwinger and Richard Feynman in the late 1940s.[58]
Particle accelerators
With a beam energy of 1.5 GeV, the first high-energy particle collider was ADONE, which began operations in 1968.[61] This device accelerated electrons and positrons in opposite directions, effectively doubling the energy of their collision when compared to striking a static target with an electron.[62] The Large Electron–Positron Collider (LEP) at CERN, which was operational from 1989 to 2000, achieved collision energies of 209 GeV and made important measurements for the Standard Model of particle physics.[63][64]
Confinement of individual electrons
Individual electrons can now be easily confined in ultra small (L=20 nm, W=20 nm) CMOS transistors operated at cryogenic temperature over a range of −269 °C (4 K) to about −258 °C (15 K).[65] The electron wavefunction spreads in a semiconductor lattice and negligibly interacts with the valence band electrons, so it can be treated in the single particle formalism, by replacing its mass with the effective mass tensor.
Fundamental properties
The invariant mass of an electron is approximately 9.109×10⁻³¹ kilograms,[68] or 5.489×10⁻⁴ atomic mass units. On the basis of Einstein's principle of mass–energy equivalence, this mass corresponds to a rest energy of 0.511 MeV. The ratio between the mass of a proton and that of an electron is about 1836.[10][69] Astronomical measurements show that the proton-to-electron mass ratio has held the same value for at least half the age of the universe, as is predicted by the Standard Model.[70]
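These figures follow from one line of arithmetic (a check added here for illustration, using the commonly quoted constants):

```python
# Back-of-envelope check of the quoted figures (illustration only).
m_e = 9.109e-31          # electron mass, kg
m_p = 1.67262e-27        # proton mass, kg
c = 2.99792458e8         # speed of light, m/s
eV = 1.602176634e-19     # joules per electronvolt

print(f"rest energy ~ {m_e * c**2 / eV / 1e6:.3f} MeV")  # ~0.511 MeV
print(f"mass ratio  ~ {m_p / m_e:.0f}")                  # ~1836
```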
Electrons have an electric charge of −1.602×10⁻¹⁹ coulomb,[68] which is used as a standard unit of charge for subatomic particles and is also called the elementary charge. This elementary charge has a relative standard uncertainty of 2.2×10⁻⁸.[68] Within the limits of experimental accuracy, the electron charge is identical to the charge of a proton, but with the opposite sign.[71] As the symbol e is used for the elementary charge, the electron is commonly symbolized by e⁻, where the minus sign indicates the negative charge. The positron is symbolized by e⁺ because it has the same properties as the electron but with a positive rather than negative charge.[67][68]
The electron has an intrinsic angular momentum or spin of 1/2.[68] This property is usually stated by referring to the electron as a spin-1/2 particle.[67] For such particles the spin magnitude is (√3/2)ħ,[note 3] while the result of the measurement of a projection of the spin on any axis can only be ±ħ/2. In addition to spin, the electron has an intrinsic magnetic moment along its spin axis.[68] It is approximately equal to one Bohr magneton,[72][note 4] which is a physical constant equal to 9.27400915(23)×10⁻²⁴ joules per tesla.[68] The orientation of the spin with respect to the momentum of the electron defines the property of elementary particles known as helicity.[73]
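A short sketch (assuming CODATA values for ħ, e and me) reproduces the quoted spin magnitude and the Bohr magneton:

```python
# Spin magnitude sqrt(s(s+1))*hbar for s = 1/2, and mu_B = e*hbar/(2 m_e).
import math

hbar = 1.054571817e-34   # J*s
e = 1.602176634e-19      # C
m_e = 9.1093837015e-31   # kg

s = 0.5
print(math.sqrt(s * (s + 1)) * hbar)   # ~9.13e-35 J*s, i.e. (sqrt(3)/2)*hbar
print(e * hbar / (2 * m_e))            # ~9.274e-24 J/T, one Bohr magneton
```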
The electron has no known substructure,[2][74] and it is assumed to be a point particle with a point charge and no spatial extent.[9] In classical physics, the angular momentum and magnetic moment of an object depend upon its physical dimensions. Hence, the concept of a dimensionless electron possessing these properties might seem paradoxical and inconsistent with experimental observations in Penning traps, which point to a finite non-zero radius of the electron. A possible explanation of this paradoxical situation is given below in the Virtual particles subsection by taking into consideration the Foldy–Wouthuysen transformation. The issue of the radius of the electron is a challenging problem of modern theoretical physics. The admission of the hypothesis of a finite radius of the electron is incompatible with the premises of the theory of relativity. On the other hand, a point-like electron (zero radius) generates serious mathematical difficulties due to the self-energy of the electron tending to infinity.[75] These aspects have been analyzed in detail by Dmitri Ivanenko and Arseny Sokolov.
Observation of a single electron in a Penning trap shows the upper limit of the particle's radius to be 10⁻²² meters.[76] There is a physical constant called the "classical electron radius", with the much larger value of 2.8179×10⁻¹⁵ m, greater than the radius of the proton. However, the terminology comes from a simplistic calculation that ignores the effects of quantum mechanics; in reality, the so-called classical electron radius has little to do with the true fundamental structure of the electron.[77][note 5]
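The simplistic calculation in question equates the electrostatic self-energy of the charge with the rest energy mec²; the following sketch (an illustration, not a claim about electron structure) reproduces the quoted value:

```python
# Classical electron radius r_e = e^2 / (4*pi*eps0 * m_e * c^2).
import math

e = 1.602176634e-19      # C
eps0 = 8.8541878128e-12  # F/m
m_e = 9.1093837015e-31   # kg
c = 2.99792458e8         # m/s

print(e**2 / (4 * math.pi * eps0 * m_e * c**2))   # ~2.818e-15 m
```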
Quantum properties
Virtual particles
Main article: Virtual particle
While an electron–positron virtual pair is in existence, the coulomb force from the ambient electric field surrounding an electron causes a created positron to be attracted to the original electron, while a created electron experiences a repulsion. This causes what is called vacuum polarization. In effect, the vacuum behaves like a medium having a dielectric permittivity more than unity. Thus the effective charge of an electron is actually smaller than its true value, and the charge decreases with increasing distance from the electron.[84][85] This polarization was confirmed experimentally in 1997 using the Japanese TRISTAN particle accelerator.[86] Virtual particles cause a comparable shielding effect for the mass of the electron.[87]
The interaction with virtual particles also explains the small (about 0.1%) deviation of the intrinsic magnetic moment of the electron from the Bohr magneton (the anomalous magnetic moment).[72][88] The extraordinarily precise agreement of this predicted difference with the experimentally determined value is viewed as one of the great achievements of quantum electrodynamics.[89]
The apparent paradox (mentioned above in the properties subsection) of a point particle electron having intrinsic angular momentum and magnetic moment can be explained by the formation of virtual photons in the electric field generated by the electron. These photons cause the electron to shift about in a jittery fashion (known as zitterbewegung),[90] which results in a net circular motion with precession. This motion produces both the spin and the magnetic moment of the electron.[9][91] In atoms, this creation of virtual photons explains the Lamb shift observed in spectral lines.[84]
An electron generates an electric field that exerts an attractive force on a particle with a positive charge, such as the proton, and a repulsive force on a particle with a negative charge. The strength of this force is determined by Coulomb's inverse square law.[92] When an electron is in motion, it generates a magnetic field.[81]:140 The Ampère-Maxwell law relates the magnetic field to the mass motion of electrons (the current) with respect to an observer. This property of induction supplies the magnetic field that drives an electric motor.[93] The electromagnetic field of an arbitrary moving charged particle is expressed by the Liénard–Wiechert potentials, which are valid even when the particle's speed is close to that of light (relativistic).
When an electron is moving through a magnetic field, it is subject to the Lorentz force that acts perpendicularly to the plane defined by the magnetic field and the electron velocity. This centripetal force causes the electron to follow a helical trajectory through the field at a radius called the gyroradius. The acceleration from this curving motion induces the electron to radiate energy in the form of synchrotron radiation.[81]:160[94][note 6] The energy emission in turn causes a recoil of the electron, known as the Abraham–Lorentz–Dirac Force, which creates a friction that slows the electron. This force is caused by a back-reaction of the electron's own field upon itself.[95]
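For concreteness, a hypothetical example (numbers chosen purely for illustration): the gyroradius r = γmev⊥/(eB) of an electron crossing a 1 T field at half the speed of light:

```python
# Gyroradius of a mildly relativistic electron (illustrative numbers).
m_e = 9.109e-31   # kg
e = 1.602e-19     # C
c = 2.998e8       # m/s

v = 0.5 * c                          # transverse speed (assumed)
gamma = (1 - (v / c) ** 2) ** -0.5   # Lorentz factor
B = 1.0                              # magnetic field, tesla (assumed)
r = gamma * m_e * v / (e * B)
print(f"gyroradius ~ {r * 1e3:.2f} mm")   # ~0.98 mm
```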
Photons mediate electromagnetic interactions between particles in quantum electrodynamics. An isolated electron at a constant velocity cannot emit or absorb a real photon; doing so would violate conservation of energy and momentum. Instead, virtual photons can transfer momentum between two charged particles. This exchange of virtual photons, for example, generates the Coulomb force.[96] Energy emission can occur when a moving electron is deflected by a charged particle, such as a proton. The acceleration of the electron results in the emission of Bremsstrahlung radiation.[97]
An inelastic collision between a photon (light) and a solitary (free) electron is called Compton scattering. This collision results in a transfer of momentum and energy between the particles, which modifies the wavelength of the photon by an amount called the Compton shift.[note 7] The scale of this wavelength shift is set by h/mec, which is known as the Compton wavelength.[98] For an electron, it has a value of 2.43×10⁻¹² m.[68] When the wavelength of the light is long (for instance, the wavelength of visible light is 0.4–0.7 μm) the wavelength shift becomes negligible. Such interaction between the light and free electrons is called Thomson scattering or linear Thomson scattering.[99]
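A sketch of the standard Compton relation Δλ = (h/mec)(1 − cos θ), using CODATA constants:

```python
# Compton wavelength and the shift as a function of scattering angle.
import math

h, m_e, c = 6.62607015e-34, 9.1093837015e-31, 2.99792458e8
lambda_c = h / (m_e * c)
print(lambda_c)                      # ~2.43e-12 m, the Compton wavelength
for theta in (30, 90, 180):
    shift = lambda_c * (1 - math.cos(math.radians(theta)))
    print(theta, shift)              # grows to 2*lambda_c at backscattering
```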
When electrons and positrons collide, they annihilate each other, giving rise to two or more gamma ray photons. If the electron and positron have negligible momentum, a positronium atom can form before annihilation results in two or three gamma ray photons totalling 1.022 MeV.[100][101] On the other hand, high-energy photons may transform into an electron and a positron by a process called pair production, but only in the presence of a nearby charged particle, such as a nucleus.[102][103]
Atoms and molecules
Main article: Atom
Probability densities for the first few hydrogen atom orbitals, seen in cross-section. The energy level of a bound electron determines the orbital it occupies, and the color reflects the probability of finding the electron at a given position.
Electrons can transfer between different orbitals by the emission or absorption of photons with an energy that matches the difference in potential.[105] Other methods of orbital transfer include collisions with particles, such as electrons, and the Auger effect.[106] To escape the atom, the energy of the electron must be increased above its binding energy to the atom. This occurs, for example, with the photoelectric effect, where an incident photon exceeding the atom's ionization energy is absorbed by the electron.[107]
The chemical bond between atoms occurs as a result of electromagnetic interactions, as described by the laws of quantum mechanics.[109] The strongest bonds are formed by the sharing or transfer of electrons between atoms, allowing the formation of molecules.[12] Within a molecule, electrons move under the influence of several nuclei and occupy molecular orbitals, much as they can occupy atomic orbitals in isolated atoms.[110] A fundamental factor in these molecular structures is the existence of electron pairs. These are electrons with opposed spins, allowing them to occupy the same molecular orbital without violating the Pauli exclusion principle (much like in atoms). Different molecular orbitals have different spatial distributions of the electron density. For instance, in bonded pairs (i.e. in the pairs that actually bind atoms together) electrons can be found with the maximal probability in a relatively small volume between the nuclei. By contrast, in non-bonded pairs electrons are distributed in a large volume around the nuclei.[111]
A lightning discharge consists primarily of a flow of electrons.[112] The electric potential needed for lightning may be generated by a triboelectric effect.[113][114]
Independent electrons moving in vacuum are termed free electrons. Electrons in metals also behave as if they were free. In reality the particles that are commonly termed electrons in metals and other solids are quasi-electrons—quasiparticles, which have the same electrical charge, spin and magnetic moment as real electrons but may have a different mass.[116] When free electrons—both in vacuum and metals—move, they produce a net flow of charge called an electric current, which generates a magnetic field. Likewise a current can be created by a changing magnetic field. These interactions are described mathematically by Maxwell's equations.[117]
Metals make relatively good conductors of heat, primarily because the delocalized electrons are free to transport thermal energy between atoms. However, unlike electrical conductivity, the thermal conductivity of a metal is nearly independent of temperature. This is expressed mathematically by the Wiedemann–Franz law,[119] which states that the ratio of thermal conductivity to the electrical conductivity is proportional to the temperature. The thermal disorder in the metallic lattice increases the electrical resistivity of the material, producing a temperature dependence for electrical current.[122]
When cooled below a point called the critical temperature, materials can undergo a phase transition in which they lose all resistivity to electrical current, in a process known as superconductivity. In BCS theory, this behavior is modeled by pairs of electrons entering a quantum state known as a Bose–Einstein condensate. These Cooper pairs have their motion coupled to nearby matter via lattice vibrations called phonons, thereby avoiding the collisions with atoms that normally create electrical resistance.[123] (Cooper pairs have a radius of roughly 100 nm, so they can overlap each other.)[124] However, the mechanism by which higher temperature superconductors operate remains uncertain.
Electrons inside conducting solids, which are quasi-particles themselves, behave as though they had split into three other quasiparticles when tightly confined at temperatures close to absolute zero: spinons, orbitons and holons.[125][126] The first carries spin and magnetic moment, the second carries the orbital location, and the third carries the electrical charge.
Motion and energy
According to special relativity, the kinetic energy of an electron moving with speed v is Ke = (γ − 1)mec², where γ = 1/√(1 − v²/c²) is the Lorentz factor and me is the mass of the electron. For example, the Stanford linear accelerator can accelerate an electron to roughly 51 GeV.[128] Since an electron behaves as a wave, at a given velocity it has a characteristic de Broglie wavelength. This is given by λe = h/p where h is the Planck constant and p is the momentum.[50] For the 51 GeV electron above, the wavelength is about 2.4×10⁻¹⁷ m, small enough to explore structures well below the size of an atomic nucleus.[129]
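The 51 GeV figure is easy to verify: at such energies the electron is ultra-relativistic, so p ≈ E/c and λ ≈ hc/E (a sketch assuming the quoted beam energy):

```python
# De Broglie wavelength of a 51 GeV electron (ultra-relativistic limit).
h = 6.62607015e-34    # J*s
c = 2.99792458e8      # m/s
eV = 1.602176634e-19  # J per eV

E = 51e9 * eV         # beam energy in joules
print(h * c / E)      # ~2.4e-17 m
```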
γ + γ ⇌ e⁺ + e⁻
For reasons that remain uncertain, during the process of leptogenesis there was an excess in the number of electrons over positrons.[132] Hence, about one electron in every billion survived the annihilation process. This excess matched the excess of protons over antiprotons, in a condition known as baryon asymmetry, resulting in a net charge of zero for the universe.[133][134] The surviving protons and neutrons began to participate in reactions with each other—in the process known as nucleosynthesis, forming isotopes of hydrogen and helium, with trace amounts of lithium. This process peaked after about five minutes.[135] Any leftover neutrons underwent negative beta decay with a half-life of about a thousand seconds, releasing a proton and electron in the process,
n → p + e⁻ + ν̄e
For about the next 300,000–400,000 years, the excess electrons remained too energetic to bind with atomic nuclei.[136] What followed is a period known as recombination, when neutral atoms were formed and the expanding universe became transparent to radiation.[137]
Roughly one million years after the big bang, the first generation of stars began to form.[137] Within a star, stellar nucleosynthesis results in the production of positrons from the fusion of atomic nuclei. These antimatter particles immediately annihilate with electrons, releasing gamma rays. The net result is a steady reduction in the number of electrons, and a matching increase in the number of neutrons. However, the process of stellar evolution can result in the synthesis of radioactive isotopes. Selected isotopes can subsequently undergo negative beta decay, emitting an electron and antineutrino from the nucleus.[138] An example is the cobalt-60 (60Co) isotope, which decays to form nickel-60 (60Ni).[139]
At the end of its lifetime, a star with more than about 20 solar masses can undergo gravitational collapse to form a black hole.[140] According to classical physics, these massive stellar objects exert a gravitational attraction that is strong enough to prevent anything, even electromagnetic radiation, from escaping past the Schwarzschild radius. However, quantum mechanical effects are believed to potentially allow the emission of Hawking radiation at this distance. Electrons (and positrons) are thought to be created at the event horizon of these stellar remnants.
When pairs of virtual particles (such as an electron and positron) are created in the vicinity of the event horizon, the random spatial distribution of these particles may permit one of them to appear on the exterior; this process is called quantum tunnelling. The gravitational potential of the black hole can then supply the energy that transforms this virtual particle into a real particle, allowing it to radiate away into space.[141] In exchange, the other member of the pair is given negative energy, which results in a net loss of mass-energy by the black hole. The rate of Hawking radiation increases with decreasing mass, eventually causing the black hole to evaporate away until, finally, it explodes.[142]
Cosmic rays are particles traveling through space with high energies. Energies as high as 3.0×10²⁰ eV have been recorded.[143] When these particles collide with nucleons in the Earth's atmosphere, a shower of particles is generated, including pions.[144] More than half of the cosmic radiation observed from the Earth's surface consists of muons. The particle called a muon is a lepton produced in the upper atmosphere by the decay of a pion.
π⁻ → μ⁻ + ν̄μ
A muon, in turn, can decay to form an electron or positron.[145]
μ⁻ → e⁻ + ν̄e + νμ
Aurorae are mostly caused by energetic electrons precipitating into the atmosphere.[146]
Remote observation of electrons requires detection of their radiated energy. For example, in high-energy environments such as the corona of a star, free electrons form a plasma that radiates energy due to Bremsstrahlung radiation. Electron gas can undergo plasma oscillation, which consists of waves caused by synchronized variations in electron density, and these produce energy emissions that can be detected by using radio telescopes.[147]
The frequency of a photon is proportional to its energy. As a bound electron transitions between different energy levels of an atom, it absorbs or emits photons at characteristic frequencies. For instance, when atoms are irradiated by a source with a broad spectrum, distinct absorption lines appear in the spectrum of transmitted radiation. Each element or molecule displays a characteristic set of spectral lines, such as the hydrogen spectral series. Spectroscopic measurements of the strength and width of these lines allow the composition and physical properties of a substance to be determined.[148][149]
In laboratory conditions, the interactions of individual electrons can be observed by means of particle detectors, which allow measurement of specific properties such as energy, spin and charge.[107] The development of the Paul trap and Penning trap allows charged particles to be contained within a small region for long durations. This enables precise measurements of the particle properties. For example, in one instance a Penning trap was used to contain a single electron for a period of 10 months.[150] The magnetic moment of the electron was measured to a precision of eleven digits, which, in 1980, was a greater accuracy than for any other physical constant.[151]
The distribution of the electrons in solid materials can be visualized by angle-resolved photoemission spectroscopy (ARPES). This technique employs the photoelectric effect to measure the reciprocal space—a mathematical representation of periodic structures that is used to infer the original structure. ARPES can be used to determine the direction, speed and scattering of electrons within the material.[154]
Plasma applications
Particle beams
Electron beams are used in welding.[156] They allow energy densities up to 10⁷ W·cm⁻² across a narrow focus diameter of 0.1–1.3 mm and usually require no filler material. This welding technique must be performed in a vacuum to prevent the electrons from interacting with the gas before reaching their target, and it can be used to join conductive materials that would otherwise be considered unsuitable for welding.[157][158]
Electron-beam lithography (EBL) is a method of etching semiconductors at resolutions smaller than a micrometer.[159] This technique is limited by high costs, slow performance, the need to operate the beam in a vacuum and the tendency of the electrons to scatter in solids. The last problem limits the resolution to about 10 nm. For this reason, EBL is primarily used for the production of small numbers of specialized integrated circuits.[160]
Electron beam processing is used to irradiate materials in order to change their physical properties or to sterilize medical and food products.[161] Under intensive irradiation, electron beams fluidise or quasi-melt glasses without a significant increase in temperature: e.g., intensive electron radiation causes a decrease of viscosity by many orders of magnitude and a stepwise decrease of its activation energy.[162]
Linear particle accelerators generate electron beams for treatment of superficial tumors in radiation therapy. Electron therapy can treat such skin lesions as basal-cell carcinomas because an electron beam only penetrates to a limited depth before being absorbed, typically up to 5 cm for electron energies in the range 5–20 MeV. An electron beam can be used to supplement the treatment of areas that have been irradiated by X-rays.[163][164]
Particle accelerators use electric fields to propel electrons and their antiparticles to high energies. These particles emit synchrotron radiation as they pass through magnetic fields. The dependency of the intensity of this radiation upon spin polarizes the electron beam—a process known as the Sokolov–Ternov effect.[note 8] Polarized electron beams can be useful for various experiments. Synchrotron radiation can also cool the electron beams to reduce the momentum spread of the particles. Once the particles have been accelerated to the required energies, electron and positron beams are collided; particle detectors observe the resulting energy emissions, which particle physics studies.[165]
Low-energy electron diffraction (LEED) is a method of bombarding a crystalline material with a collimated beam of electrons and then observing the resulting diffraction patterns to determine the structure of the material. The required energy of the electrons is typically in the range 20–200 eV.[166] The reflection high-energy electron diffraction (RHEED) technique uses the reflection of a beam of electrons fired at various low angles to characterize the surface of crystalline materials. The beam energy is typically in the range 8–20 keV and the angle of incidence is 1–4°.[167][168]
The electron microscope directs a focused beam of electrons at a specimen. Some electrons change their properties, such as movement direction, angle, and relative phase and energy as the beam interacts with the material. Microscopists can record these changes in the electron beam to produce atomically resolved images of the material.[169] In blue light, conventional optical microscopes have a diffraction-limited resolution of about 200 nm.[170] By comparison, electron microscopes are limited by the de Broglie wavelength of the electron. This wavelength, for example, is equal to 0.0037 nm for electrons accelerated across a 100,000-volt potential.[171] The Transmission Electron Aberration-Corrected Microscope is capable of sub-0.05 nm resolution, which is more than enough to resolve individual atoms.[172] This capability makes the electron microscope a useful laboratory instrument for high resolution imaging. However, electron microscopes are expensive instruments that are costly to maintain.
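The quoted 0.0037 nm can be reproduced with the relativistically corrected de Broglie wavelength of an electron accelerated through a potential V (a sketch using standard constants):

```python
# Electron wavelength after acceleration through potential V:
# lambda = h / sqrt(2 m e V (1 + e V / (2 m c^2))).
import math

h, m_e, e, c = 6.62607015e-34, 9.1093837015e-31, 1.602176634e-19, 2.99792458e8
V = 100e3   # accelerating voltage, volts
p = math.sqrt(2 * m_e * e * V * (1 + e * V / (2 * m_e * c**2)))
print(h / p * 1e9, "nm")   # ~0.0037 nm
```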
Two main types of electron microscopes exist: transmission and scanning. Transmission electron microscopes function like overhead projectors, with a beam of electrons passing through a slice of material then being projected by lenses on a photographic slide or a charge-coupled device. Scanning electron microscopes raster a finely focused electron beam, as in a TV set, across the studied sample to produce the image. Magnifications range from 100× to 1,000,000× or higher for both microscope types. The scanning tunneling microscope uses quantum tunneling of electrons from a sharp metal tip into the studied material and can produce atomically resolved images of its surface.[173][174][175]
Other applications
In the free-electron laser (FEL), a relativistic electron beam passes through a pair of undulators that contain arrays of dipole magnets whose fields point in alternating directions. The electrons emit synchrotron radiation that coherently interacts with the same electrons to strongly amplify the radiation field at the resonance frequency. FEL can emit a coherent high-brilliance electromagnetic radiation with a wide range of frequencies, from microwaves to soft X-rays. These devices may find manufacturing, communication and various medical applications, such as soft tissue surgery.[176]
Electrons are important in cathode ray tubes, which have been extensively used as display devices in laboratory instruments, computer monitors and television sets.[177] In a photomultiplier tube, every photon striking the photocathode initiates an avalanche of electrons that produces a detectable current pulse.[178] Vacuum tubes use the flow of electrons to manipulate electrical signals, and they played a critical role in the development of electronics technology. However, they have been largely supplanted by solid-state devices such as the transistor.[179]
Notes and references
2. ^ The electron's charge is the negative of elementary charge, which has a positive value for the proton.
3. ^ This magnitude is obtained from the spin quantum number as S = √(s(s+1))·ħ for quantum number s = 1/2.
4. ^ Bohr magneton: μB = eħ/(2me).
2. ^ a b c Eichten, E.J.; Peskin, M.E.; Peskin, M. (1983). "New Tests for Quark and Lepton Substructure". Physical Review Letters 50 (11): 811–814. Bibcode:1983PhRvL..50..811E. doi:10.1103/PhysRevLett.50.811.
6. ^ a b c d e f Thomson, J.J. (1897). "Cathode Rays". Philosophical Magazine 44 (269): 293. doi:10.1080/14786449708621070.
7. ^ a b c d e P.J. Mohr, B.N. Taylor, and D.B. Newell (2011), "The 2010 CODATA Recommended Values of the Fundamental Physical Constants" (Web Version 6.0). This database was developed by J. Baker, M. Douma, and S. Kotochigova. Available: http://physics.nist.gov/constants [Thursday, 02-Jun-2011 21:00:12 EDT]. National Institute of Standards and Technology, Gaithersburg, MD 20899.
13. ^ a b c Dahl (1997:122–185).
17. ^ Keithley, J.F. (1999). The Story of Electrical and Magnetic Measurements: From 500 B.C. to the 1940s. IEEE Press. pp. 15, 20. ISBN 0-7803-1193-0.
20. ^ Barrow, J.D. (1983). "Natural Units Before Planck". Quarterly Journal of the Royal Astronomical Society 24: 24–26. Bibcode:1983QJRAS..24...24B.
21. ^ Stoney, G.J. (1894). "Of the "Electron," or Atom of Electricity". Philosophical Magazine 38 (5): 418–420. doi:10.1080/14786449408620653.
22. ^ "electron, n.2". OED Online. March 2013. Oxford University Press. Accessed 12 April 2013 [1]
24. ^ Guralnik, D.B. ed. (1970). Webster's New World Dictionary. Prentice Hall. p. 450.
26. ^ Dahl (1997:55–58).
28. ^ a b c Leicester, H.M. (1971). The Historical Background of Chemistry. Courier Dover. pp. 221–222. ISBN 0-486-61053-5.
29. ^ Dahl (1997:64–78).
30. ^ Zeeman, P.; Zeeman, P. (1907). "Sir William Crookes, F.R.S". Nature 77 (1984): 1–3. Bibcode:1907Natur..77....1C. doi:10.1038/077001a0.
31. ^ Dahl (1997:99).
32. ^ Frank Wilczek: "Happy Birthday, Electron" Scientific American, June 2012.
35. ^ Becquerel, H. (1900). "Déviation du Rayonnement du Radium dans un Champ Électrique". Comptes rendus de l'Académie des sciences (in French) 130: 809–815.
36. ^ Buchwald and Warwick (2001:90–91).
40. ^ Das Gupta, N.N.; Ghosh, S.K. (1999). "A Report on the Wilson Cloud Chamber and Its Applications in Physics". Reviews of Modern Physics 18 (2): 225–290. Bibcode:1946RvMP...18..225G. doi:10.1103/RevModPhys.18.225.
44. ^ a b Arabatzis, T.; Gavroglu, K. (1997). "The chemists' electron". European Journal of Physics 18 (3): 150–163. Bibcode:1997EJPh...18..150A. doi:10.1088/0143-0807/18/3/005.
48. ^ Uhlenbeck, G.E.; Goudsmith, S. (1925). "Ersetzung der Hypothese vom unmechanischen Zwang durch eine Forderung bezüglich des inneren Verhaltens jedes einzelnen Elektrons". Die Naturwissenschaften (in German) 13 (47): 953. Bibcode:1925NW.....13..953E. doi:10.1007/BF01558878.
49. ^ Pauli, W. (1923). "Über die Gesetzmäßigkeiten des anomalen Zeemaneffektes". Zeitschrift für Physik (in German) 16 (1): 155–164. Bibcode:1923ZPhy...16..155P. doi:10.1007/BF01327386.
53. ^ Schrödinger, E. (1926). "Quantisierung als Eigenwertproblem". Annalen der Physik (in German) 385 (13): 437–490. Bibcode:1926AnP...385..437S. doi:10.1002/andp.19263851302.
60. ^ Elder, F.R.; et al. (1947). "Radiation from Electrons in a Synchrotron". Physical Review 71 (11): 829–830. Bibcode:1947PhRv...71..829E. doi:10.1103/PhysRev.71.829.5.
62. ^ Bernardini, C. (2004). "AdA: The First Electron–Positron Collider". Physics in Perspective 6 (2): 156–183. Bibcode:2004PhP.....6..156B. doi:10.1007/s00016-003-0202-y.
66. ^ Frampton, P.H.; Hung, P.Q.; Sher, Marc (2000). "Quarks and Leptons Beyond the Third Generation". Physics Reports 330 (5–6): 263–348. arXiv:hep-ph/9903387. Bibcode:2000PhR...330..263F. doi:10.1016/S0370-1573(99)00095-2.
68. ^ a b c d e f g h i The original source for CODATA is Mohr, P.J.; Taylor, B.N.; Newell, D.B. (2006). "CODATA recommended values of the fundamental physical constants". Reviews of Modern Physics 80 (2): 633–730. arXiv:0801.0028. Bibcode:2008RvMP...80..633M. doi:10.1103/RevModPhys.80.633.
70. ^ Murphy, M.T.; et al. (2008). "Strong Limit on a Variable Proton-to-Electron Mass Ratio from Molecules in the Distant Universe". Science 320 (5883): 1611–1613. arXiv:0806.3081. Bibcode:2008Sci...320.1611M. doi:10.1126/science.1156352. PMID 18566280.
71. ^ Zorn, J.C.; Chamberlain, G.E.; Hughes, V.W. (1963). "Experimental Limits for the Electron-Proton Charge Difference and for the Charge of the Neutron". Physical Review 129 (6): 2566–2576. Bibcode:1963PhRv..129.2566Z. doi:10.1103/PhysRev.129.2566.
74. ^ Gabrielse, G.; et al. (2006). "New Determination of the Fine Structure Constant from the Electron g Value and QED". Physical Review Letters 97 (3): 030802(1–4). Bibcode:2006PhRvL..97c0802G. doi:10.1103/PhysRevLett.97.030802.
75. ^ Eduard Shpolsky, Atomic physics (Atomnaia fizika), second edition, 1951.
76. ^ Dehmelt, H. (1988). "A Single Atomic Particle Forever Floating at Rest in Free Space: New Value for Electron Radius". Physica Scripta T22: 102–10. Bibcode:1988PhST...22..102D. doi:10.1088/0031-8949/1988/T22/016.
78. ^ Steinberg, R.I.; et al. (1999). "Experimental test of charge conservation and the stability of the electron". Physical Review D 61 (2): 2582–2586. Bibcode:1975PhRvD..12.2582S. doi:10.1103/PhysRevD.12.2582.
79. ^ Beringer, J.; et al. (Particle Data Group) (2012). "Review of Particle Physics: [electron properties]". Physical Review D 86 (1): 010001. Bibcode:2012PhRvD..86a0001B. doi:10.1103/PhysRevD.86.010001.
80. ^ Back, H. O.; et al. (2002). "Search for electron decay mode e → γ + ν with prototype of Borexino detector". Physics Letters B 525: 29–40. Bibcode:2002PhLB..525...29B. doi:10.1016/S0370-2693(01)01440-X.
81. ^ a b c d e Munowitz, M. (2005). Knowing, The Nature of Physical Law. Oxford University Press. ISBN 0-19-516737-6.
86. ^ Levine, I.; et al. (1997). "Measurement of the Electromagnetic Coupling at Large Momentum Transfer". Physical Review Letters 78 (3): 424–427. Bibcode:1997PhRvL..78..424L. doi:10.1103/PhysRevLett.78.424.
87. ^ Murayama, H. (March 10–17, 2006). "Supersymmetry Breaking Made Easy, Viable and Generic". "Proceedings of the XLIInd Rencontres de Moriond on Electroweak Interactions and Unified Theories". La Thuile, Italy. arXiv:0709.3041. —lists a 9% mass difference for an electron that is the size of the Planck distance.
90. ^ Foldy, L.L.; Wouthuysen, S. (1950). "On the Dirac Theory of Spin 1/2 Particles and Its Non-Relativistic Limit". Physical Review 78: 29–36. Bibcode:1950PhRv...78...29F. doi:10.1103/PhysRev.78.29.
91. ^ Sidharth, B.G. (2008). "Revisiting Zitterbewegung". International Journal of Theoretical Physics 48 (2): 497–506. arXiv:0806.0985. Bibcode:2009IJTP...48..497S. doi:10.1007/s10773-008-9825-8.
92. ^ Elliott, R.S. (1978). "The History of Electromagnetics as Hertz Would Have Known It". IEEE Transactions on Microwave Theory and Techniques 36 (5): 806–823. Bibcode:1988ITMTT..36..806E. doi:10.1109/22.3600.
94. ^ Mahadevan, R.; Narayan, R.; Yi, I. (1996). "Harmony in Electrons: Cyclotron and Synchrotron Emission by Thermal Electrons in a Magnetic Field". The Astrophysical Journal 465: 327–337. arXiv:astro-ph/9601073. Bibcode:1996ApJ...465..327M. doi:10.1086/177422.
95. ^ Rohrlich, F. (1999). "The Self-Force and Radiation Reaction". American Journal of Physics 68 (12): 1109–1112. Bibcode:2000AmJPh..68.1109R. doi:10.1119/1.1286430.
97. ^ Blumenthal, G.J.; Gould, R. (1970). "Bremsstrahlung, Synchrotron Radiation, and Compton Scattering of High-Energy Electrons Traversing Dilute Gases". Reviews of Modern Physics 42 (2): 237–270. Bibcode:1970RvMP...42..237B. doi:10.1103/RevModPhys.42.237.
100. ^ Beringer, R.; Montgomery, C.G. (1942). "The Angular Distribution of Positron Annihilation Radiation". Physical Review 61 (5–6): 222–224. Bibcode:1942PhRv...61..222B. doi:10.1103/PhysRev.61.222.
101. ^ Buffa, A. (2000). College Physics (4th ed.). Prentice Hall. p. 888. ISBN 0-13-082444-5.
102. ^ Eichler, J. (2005). "Electron–positron pair production in relativistic ion–atom collisions". Physics Letters A 347 (1–3): 67–72. Bibcode:2005PhLA..347...67E. doi:10.1016/j.physleta.2005.06.105.
103. ^ Hubbell, J.H. (2006). "Electron positron pair production by photons: A historical overview". Radiation Physics and Chemistry 75 (6): 614–623. Bibcode:2006RaPC...75..614H. doi:10.1016/j.radphyschem.2005.10.008.
104. ^ Quigg, C. (June 4–30, 2000). "The Electroweak Theory". "TASI 2000: Flavor Physics for the Millennium". Boulder, Colorado. p. 80. arXiv:hep-ph/0204104.
107. ^ a b Grupen, C. (2000). "Physics of Particle Detection". AIP Conference Proceedings 536: 3–34. arXiv:physics/9906063. doi:10.1063/1.1361756.
111. ^ Daudel, R.; et al. (1973). "The Electron Pair in Chemistry". Canadian Journal of Chemistry 52 (8): 1310–1320. doi:10.1139/v74-201.
113. ^ Freeman, G.R.; March, N.H. (1999). "Triboelectricity and some associated phenomena". Materials Science and Technology 15 (12): 1454–1458. doi:10.1179/026708399101505464.
114. ^ Forward, K.M.; Lacks, D.J.; Sankaran, R.M. (2009). "Methodology for studying particle–particle triboelectrification in granular materials". Journal of Electrostatics 67 (2–3): 178–183. doi:10.1016/j.elstat.2008.12.002.
122. ^ Durrant, A. (2000). Quantum Physics of Matter: The Physical World. CRC Press. pp. 43, 71–78. ISBN 0-7503-0721-8.
124. ^ Kadin, A.M. (2007). "Spatial Structure of the Cooper Pair". Journal of Superconductivity and Novel Magnetism 20 (4): 285–292. arXiv:cond-mat/0510279. doi:10.1007/s10948-006-0198-z.
126. ^ Jompol, Y.; et al. (2009). "Probing Spin-Charge Separation in a Tomonaga-Luttinger Liquid". Science 325 (5940): 597–601. arXiv:1002.2782. Bibcode:2009Sci...325..597J. doi:10.1126/science.1171769. PMID 19644117.
133. ^ Kolb, E.W.; Wolfram, Stephen (1980). "The Development of Baryon Asymmetry in the Early Universe". Physics Letters B 91 (2): 217–221. Bibcode:1980PhLB...91..217K. doi:10.1016/0370-2693(80)90435-9.
134. ^ Sather, E. (Spring–Summer 1996). "The Mystery of Matter Asymmetry". Beam Line. University of Stanford. Retrieved 2008-11-01.
136. ^ Boesgaard, A.M.; Steigman, G. (1985). "Big bang nucleosynthesis – Theories and observations". Annual Review of Astronomy and Astrophysics 23 (2): 319–378. Bibcode:1985ARA&A..23..319B. doi:10.1146/annurev.aa.23.090185.001535.
137. ^ a b Barkana, R. (2006). "The First Stars in the Universe and Cosmic Reionization". Science 313 (5789): 931–934. arXiv:astro-ph/0608450. Bibcode:2006Sci...313..931B. doi:10.1126/science.1125644. PMID 16917052.
138. ^ Burbidge, E.M.; et al. (1957). "Synthesis of Elements in Stars". Reviews of Modern Physics 29 (4): 548–647. Bibcode:1957RvMP...29..547B. doi:10.1103/RevModPhys.29.547.
139. ^ Rodberg, L.S.; Weisskopf, V. (1957). "Fall of Parity: Recent Discoveries Related to Symmetry of Laws of Nature". Science 125 (3249): 627–633. Bibcode:1957Sci...125..627R. doi:10.1126/science.125.3249.627. PMID 17810563.
141. ^ Parikh, M.K.; Wilczek, F. (2000). "Hawking Radiation As Tunneling". Physical Review Letters 85 (24): 5042–5045. arXiv:hep-th/9907001. Bibcode:2000PhRvL..85.5042P. doi:10.1103/PhysRevLett.85.5042. PMID 11102182.
143. ^ Halzen, F.; Hooper, D. (2002). "High-energy neutrino astronomy: the cosmic ray connection". Reports on Progress in Physics 66 (7): 1025–1078. arXiv:astro-ph/0204527. Bibcode:2002astro.ph..4527H. doi:10.1088/0034-4885/65/7/201.
147. ^ Gurnett, D.A.; Anderson, R. (1976). "Electron Plasma Oscillations Associated with Type III Radio Bursts". Science 194 (4270): 1159–1162. Bibcode:1976Sci...194.1159G. doi:10.1126/science.194.4270.1159. PMID 17790910.
151. ^ Ekstrom, P.; Wineland, David (1980). "The isolated Electron". Scientific American 243 (2): 91–101. doi:10.1038/scientificamerican0880-104. Retrieved 2008-09-24.
152. ^ Mauritsson, J. "Electron filmed for the first time ever". Lund University. Archived from the original on March 25, 2009. Retrieved 2008-09-17.
153. ^ Mauritsson, J.; et al. (2008). "Coherent Electron Scattering Captured by an Attosecond Quantum Stroboscope". Physical Review Letters 100 (7): 073003. arXiv:0708.1060. Bibcode:2008PhRvL.100g3003M. doi:10.1103/PhysRevLett.100.073003. PMID 18352546.
154. ^ Damascelli, A. (2004). "Probing the Electronic Structure of Complex Systems by ARPES". Physica Scripta T109: 61–74. arXiv:cond-mat/0307085. Bibcode:2004PhST..109...61D. doi:10.1238/Physica.Topical.109a00061.
158. ^ Benedict, G.F. (1987). Nontraditional Manufacturing Processes. Manufacturing engineering and materials processing 19. CRC Press. p. 273. ISBN 0-8247-7352-7.
159. ^ Ozdemir, F.S. (June 25–27, 1979). "Electron beam lithography". "Proceedings of the 16th Conference on Design automation". San Diego, CA, USA: IEEE Press. pp. 383–391. Retrieved 2008-10-16.
161. ^ Jongen, Y.; Herer, A. (May 2–5, 1996). "Electron Beam Scanning in Industrial Applications". "APS/AAPT Joint Meeting". American Physical Society. Bibcode:1996APS..MAY.H9902J.
162. ^ Mobus, G.; et al. (2010). Journal of Nuclear Materials 396: 264–271. doi:10.1016/j.jnucmat.2009.11.020.
163. ^ Beddar, A.S.; Domanovic, Mary Ann; Kubu, Mary Lou; Ellis, Rod J.; Sibata, Claudio H.; Kinsella, Timothy J. (2001). "Mobile linear accelerators for intraoperative radiation therapy". AORN Journal 74 (5): 700. doi:10.1016/S0001-2092(06)61769-9. Retrieved 2008-10-26.
164. ^ Gazda, M.J.; Coia, L.R. (June 1, 2007). "Principles of Radiation Therapy". Retrieved 2013-10-31.
166. ^ Oura, K.; et al. (2003). Surface Science: An Introduction. Springer. pp. 1–45. ISBN 3-540-00545-5.
168. ^ Heppell, T.A. (1967). "A combined low energy and reflection high energy electron diffraction apparatus". Journal of Scientific Instruments 44 (9): 686–688. Bibcode:1967JScI...44..686H. doi:10.1088/0950-7671/44/9/311.
62a02f3525557022 | Ab Initio Description of Open-Shell Nuclei: Merging No-Core Shell Model and In-Medium Similarity Renormalization Group Approaches
The central task of ab initio nuclear structure theory is to solve the many-body Schrödinger equation for specific nuclei of interest. This equation can be cast as an eigenvalue problem that has to be solved either explicitly by diagonalizing extremely large matrices or implicitly by transforming the underlying nuclear Hamiltonian to a simplified structure.
A joint team of researchers from NSCL/FRIB and TU Darmstadt have proposed to merge two such explicit and implicit approaches, namely the No-Core Shell Model (NCSM) and the In-Medium Similarity Renormalization Group, in order to combine their advantages. The resulting method is the so-called In-Medium (IM) NCSM. In this approach, a continuous renormalization group transformation simplifies the structure of the nuclear Hamiltonian and greatly accelerates the convergence of the NCSM diagonalization towards a controlled solution. Since the IM-NCSM is at its core a diagonalization method, it provides immediate and convenient access to the low-lying states of medium-mass nuclei, and the resulting wave functions can be used to confront a multitude of observables with experiment.
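The eigenvalue-problem remark can be illustrated with a toy example (a sketch added here, not the actual IM-SRG flow equations): a unitary transformation leaves the spectrum of a Hermitian "Hamiltonian" unchanged, which is what licenses simplifying H before diagonalizing it.

```python
# Toy illustration: unitary transformations preserve the spectrum.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
H = (A + A.T) / 2                                   # toy Hermitian Hamiltonian
U, _ = np.linalg.qr(rng.standard_normal((6, 6)))    # a random orthogonal U
H_tilde = U.T @ H @ U                               # "transformed" Hamiltonian

print(np.allclose(np.linalg.eigvalsh(H), np.linalg.eigvalsh(H_tilde)))  # True
```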
The IM-NCSM and its first applications are described in Phys. Rev. Lett. 118, 152503 (2017). |
11ec885a41c19c69 |
Imagine you're teaching a first course on quantum mechanics in which your students are well-versed in classical mechanics, but have never seen any quantum before. How would you motivate the subject and convince your students that in fact classical mechanics cannot explain the real world and that quantum mechanics, given your knowledge of classical mechanics, is the most obvious alternative to try?
If you sit down and think about it, the idea that the state of a system, instead of being specified by the finitely many particles' position and momentum, is now described by an element of some abstract (rigged) Hilbert space and that the observables correspond to self-adjoint operators on the space of states is not at all obvious. Why should this be the case, or at least, why might we expect this to be the case?
Then there is the issue of measurement which is even more difficult to motivate. In the usual formulation of quantum mechanics, we assume that, given a state $|\psi \rangle$ and an observable $A$, the probability of measuring a value between $a$ and $a+da$ is given by $|\langle a|\psi \rangle |^2da$ (and furthermore, if $a$ is not an eigenvalue of $A$, then the probability of measuring a value in this interval is $0$). How would you convince your students that this had to be the case?
I have thought about this question of motivation for a couple of years now, and so far, the only answers I've come up with are incomplete, not entirely satisfactory, and seem to be much more non-trivial than I feel they should be. So, what do you guys think? Can you motivate the usual formulation of quantum mechanics using only classical mechanics and minimal appeal to experimental results?
Note that, at some point, you will have to make reference to experiment. After all, this is the reason why we needed to develop quantum mechanics. In principle, we could just say "The Born Rule is true because its experimentally verified.", but I find this particularly unsatisfying. I think we can do better. Thus, I would ask that when you do invoke the results of an experiment, you do so to only justify fundamental truths, by which I mean something that can not itself just be explained in terms of more theory. You might say that my conjecture is that the Born Rule is not a fundamental truth in this sense, but can instead be explained by more fundamental theory, which itself is justified via experiment.
Edit: To clarify, I will try to make use of a much simpler example. In an ideal gas, if you fix the volume, then the temperature is proportional to pressure. So we may ask "Why?". You could say "Well, because experiment.", or alternatively you could say "It is a trivial corollary of the ideal gas law.". If you choose the latter, you can then ask why that is true. Once again, you can just say "Because experiment." or you could try to prove it using more fundamental physical truths (using the kinetic theory of gases, for example). The objective, then, is to come up with the most fundamental physical truths, prove everything else we know in terms of those, and then verify the fundamental physical truths via experiment. And in this particular case, the objective is to do this with quantum mechanics.
"making as little reference to experiment as possible" !!! The only reason we have developed quantum mechanics is because the experimental evidence demanded it and demands it. – anna v Dec 5 '12 at 17:59
Are you looking for a derivation from simple physical principles a la Einstein's derivation of relativity from his two postulates? That is the basic open question in quantum foundations, isn't it? – Emilio Pisanty Dec 5 '12 at 18:58
right, then the definitive answer is that such an argument does not exist. plenty of people devote their academic careers to answering your question, as Emilio implied, and no one is in agreement as to the correct answer, yet. if you are interested in this, then you should look up the work of Rob Spekkens. also, Chris Fuchs, Lucien Hardy, Jonathan Barrett, and probably a bunch of other people too. – Mark Mitchison Dec 5 '12 at 20:16
um...not necessarily. It's just that I think that I now understand the intent of the OP's question, and if I do - it cannot be put better than Emilio did - it is simply 'the standard open question of quantum foundations'. I know enough people working in this field to know that the experts do not consider this question at all resolved. – Mark Mitchison Dec 5 '12 at 20:26
Hey Johnny! Hope all is well. As to your question, I really feel that you can not talk about quantum mechanics in the way you envision while giving your students as solid an understanding when you approach from an experimental view point. I think the closest you could get would be to talk about the problems encountered before quantum mechanics and how quantum mechanics was realized and built after that; this would merely omit information on the experiments, equally unsatisfying. It is a tough question! – Dylan Sabulsky Dec 6 '12 at 1:17
I am late to this party here, but I can maybe advertize something pretty close to a derivation of quantum mechanics from pairing classical mechanics with its natural mathematical context, namely with Lie theory. I haven't had a chance yet to try the following on first-year students, but I am pretty confident that with just a tad more pedagogical guidance thrown in as need be, the following should make for a rather satisfactory motivation for any student with a little bit of mathematical/theoretical physics inclination.
For more along the following lines see at nLab:quantization.
Quantization of course was and is motivated by experiment, hence by observation of the observable universe: it just so happens that quantum mechanics and quantum field theory correctly account for experimental observations, where classical mechanics and classical field theory gives no answer or incorrect answers. A historically important example is the phenomenon called the “ultraviolet catastrophe”, a paradox predicted by classical statistical mechanics which is not observed in nature, and which is corrected by quantum mechanics.
But one may also ask, independently of experimental input, if there are good formal mathematical reasons and motivations to pass from classical mechanics to quantum mechanics. Could one have been led to quantum mechanics by just pondering the mathematical formalism of classical mechanics? (Hence more precisely: is there a natural Synthetic Quantum Field Theory?)
The following spells out an argument to this extent. It will work for readers with a background in modern mathematics, notably in Lie theory, and with an understanding of the formalization of classical/prequantum mechanics in terms of symplectic geometry.
So to briefly recall, a system of classical mechanics/prequantum mechanics is a phase space, formalized as a symplectic manifold $(X,ω)$. A symplectic manifold is in particular a Poisson manifold, which means that the algebra of functions on phase space $X$, hence the algebra of classical observables, is canonically equipped with a compatible Lie bracket: the Poisson bracket. This Lie bracket is what controls dynamics in classical mechanics. For instance if $H\in C^{∞}(X)$ is the function on phase space which is interpreted as assigning to each configuration of the system its energy – the Hamiltonian function – then the Poisson bracket with $H$ yields the infinitesimal time evolution of the system: the differential equation famous as Hamilton's equations.
To take notice of here is the infinitesimal nature of the Poisson bracket. Generally, whenever one has a Lie algebra $\mathfrak{g}$, then it is to be regarded as the infinitesimal approximation to a globally defined object, the corresponding Lie group (or generally smooth group) $G$. One also says that $G$ is a Lie integration of $\mathfrak{g}$ and that $\mathfrak{g}$ is the Lie differentiation of $G$.
Therefore a natural question to ask is: Since the observables in classical mechanics form a Lie algebra under Poisson bracket, what then is the corresponding Lie group?
The answer to this is of course “well known” in the literature, in the sense that there are relevant monographs which state the answer. But, maybe surprisingly, the answer to this question is not (at time of this writing) a widely advertized fact that would have found its way into the basic educational textbooks. The answer is that this Lie group which integrates the Poisson bracket is the “quantomorphism group”, an object that seamlessly leads over to the quantum mechanics of the system.
Before we say this in more detail, we need a brief technical aside: of course Lie integration is not quite unique. There may be different global Lie group objects with the same Lie algebra.
The simplest example of this is already the one of central importance for the issue of quantization, namely the Lie integration of the abelian line Lie algebra $\mathbb{R}$. This has essentially two different Lie groups associated with it: the simply connected translation group, which is just $\mathbb{R}$ itself again, equipped with its canonical additive abelian group structure, and the discrete quotient of this by the group of integers, which is the circle group
$$ U(1) = \mathbb{R}/\mathbb{Z} \,. $$
Notice that it is the discrete and hence “quantized” nature of the integers that makes the real line become a circle here. This is not entirely a coincidence of terminology, but can be traced back to be at the heart of what is “quantized” about quantum mechanics.
Namely one finds that the Poisson bracket Lie algebra $\mathfrak{poiss}(X,ω)$ of the classical observables on phase space is (for X a connected manifold) a Lie algebra extension of the Lie algebra $\mathfrak{ham}(X)$ of Hamiltonian vector fields on $X$ by the line Lie algebra:
$$ \mathbb{R} \longrightarrow \mathfrak{poiss}(X,\omega) \longrightarrow \mathfrak{ham}(X) \,. $$
This means that under Lie integration the Poisson bracket turns into a central extension of the group of Hamiltonian symplectomorphisms of $(X,ω)$. And either it is the fairly trivial non-compact extension by $\mathbb{R}$, or it is the interesting central extension by the circle group $U(1)$. For this non-trivial Lie integration to exist, $(X,ω)$ needs to satisfy a quantization condition which says that it admits a prequantum line bundle. If so, then this $U(1)$-central extension of the group $Ham(X,\omega)$ of Hamiltonian symplectomorphisms exists and is called… the quantomorphism group $QuantMorph(X,\omega)$:
$$ U(1) \longrightarrow QuantMorph(X,\omega) \longrightarrow Ham(X,\omega) \,. $$
While important, for some reason this group is not very well known. Which is striking, because there is a small subgroup of it which is famous in quantum mechanics: the Heisenberg group.
More exactly, whenever $(X,\omega)$ itself has a compatible group structure, notably if $(X,\omega)$ is just a symplectic vector space (regarded as a group under addition of vectors), then we may ask for the subgroup of the quantomorphism group which covers the (left) action of phase space $(X,\omega)$ on itself. This is the corresponding Heisenberg group $Heis(X,\omega)$, which in turn is a $U(1)$-central extension of the group $X$ itself:
$$ U(1) \longrightarrow Heis(X,\omega) \longrightarrow X \,. $$
At this point it is worthwhile to pause for a second and note how the hallmark of quantum mechanics has appeared as if out of nowhere from just applying Lie integration to the Lie algebraic structures in classical mechanics:
if we think of Lie integrating $\mathbb{R}$ to the interesting circle group $U(1)$ instead of to the uninteresting translation group $\mathbb{R}$, then the name of its canonical basis element 1∈ℝ is canonically ”i”, the imaginary unit. Therefore one often writes the above central extension instead as follows:
$$ i \mathbb{R} \longrightarrow \mathfrak{poiss}(X,\omega) \longrightarrow \mathfrak{ham}(X,\omega) $$
in order to amplify this. But now consider the simple special case where $(X,\omega)=(\mathbb{R}^{2},dp∧dq)$ is the 2-dimensional symplectic vector space which is for instance the phase space of the particle propagating on the line. Then a canonical set of generators for the corresponding Poisson bracket Lie algebra consists of the linear functions p and q of classical mechanics textbook fame, together with the constant function. Under the above Lie theoretic identification, this constant function is the canonical basis element of $i\mathbb{R}$, hence purely Lie theoretically it is to be called ”i”.
With this notation then the Poisson bracket, written in the form that makes its Lie integration manifest, indeed reads
$$ [q,p] = i \,. $$
Since the choice of basis element of $i\mathbb{R}$ is arbitrary, we may rescale here the i by any non-vanishing real number without changing this statement. If we write ”ℏ” for this element, then the Poisson bracket instead reads
$$ [q,p] = i \hbar \,. $$
This is of course the hallmark equation for quantum physics, if we interpret ℏ here indeed as Planck's constant. We see it arise here by nothing but considering the non-trivial (the interesting, the non-simply connected) Lie integration of the Poisson bracket.
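As a numerical aside (an illustration added here, not part of the original answer), one can represent $q$ and $p$ on a truncated harmonic-oscillator basis and check the commutator directly; the relation $[q,p] = i\hbar$ holds on every basis state except the last, where the truncation necessarily fails, since the trace of any finite-dimensional commutator vanishes.

```python
# Check [q, p] = i*hbar on an n-state truncation of the oscillator basis.
import numpy as np

n, hbar = 8, 1.0
a = np.diag(np.sqrt(np.arange(1, n)), 1)     # annihilation operator
q = np.sqrt(hbar / 2) * (a + a.T)
p = 1j * np.sqrt(hbar / 2) * (a.T - a)

comm = q @ p - p @ q
print(np.round(np.diag(comm), 6))
# i*hbar in every slot except the last, an artifact of the truncation.
```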
This is only the beginning of the story of quantization, naturally understood and indeed “derived” from applying Lie theory to classical mechanics. From here the story continues. It is called the story of geometric quantization. We close this motivation section here with some brief outlook.
The quantomorphism group which is the non-trivial Lie integration of the Poisson bracket is naturally constructed as follows: given the symplectic form $ω$, it is natural to ask if it is the curvature 2-form of a $U(1)$-principal connection $∇$ on a complex line bundle $L$ over $X$ (this is directly analogous to Dirac charge quantization when instead of a symplectic form on phase space we consider the field strength 2-form of electromagnetism on spacetime). If so, such a connection $(L,∇)$ is called a prequantum line bundle of the phase space $(X,ω)$. The quantomorphism group is simply the automorphism group of the prequantum line bundle, covering diffeomorphisms of the phase space (the Hamiltonian symplectomorphisms mentioned above).
As such, the quantomorphism group naturally acts on the space of sections of $L$. Such a section is like a wavefunction, except that it depends on all of phase space, instead of just on the "canonical coordinates". For purely abstract mathematical reasons (which we won't discuss here, but see at motivic quantization for more) it is indeed natural to choose a "polarization" of phase space into canonical coordinates and canonical momenta and consider only those sections of the prequantum line bundle which depend on just the former. These are the actual wavefunctions of quantum mechanics, hence the quantum states. And the subgroup of the quantomorphism group which preserves these polarized sections is the group of exponentiated quantum observables. For instance, in the simple case mentioned before where $(X,\omega)$ is the 2-dimensional symplectic vector space, this is the Heisenberg group with its famous action by multiplication and differentiation operators on the space of complex-valued functions on the real line.
For more along these lines see at nLab:quantization.
Dear Urs: fantastic answer. I fixed up some of the fraktur and other symbols which hadn't seemed to come across from your nLab article so well: you might like to check correctness. – WetSavannaAnimal aka Rod Vance Oct 20 '14 at 11:29
Can you, for a concrete simple example in quantum mechanics, follow this procedure (take the classical geometry, choose a circle-group bundle with connection, write down the expression which amounts to the integration "in the $i$-direction") and express the observables $\langle H\rangle, \langle P\rangle, \dots$ in terms of this? Is the harmonic oscillator, starting from the classical Hamiltonian, worked out somewhere with emphasis on exactly those bundles? – NikolajK Oct 27 '14 at 11:12
Yes, this is called "geometric quantization". It's standard. (I was just offering a motivation for existing theory, not a new theory.) The geometric quantization of the standard examples (e.g. harmonic oscillator) is in all the standard textbooks and lecture notes, take your pick here: ncatlab.org/nlab/show/… – Urs Schreiber Oct 27 '14 at 11:39
Why would you ever try to motivate a physical theory without appealing to experimental results??? The motivation of quantum mechanics is that it explains experimental results. It is obvious that you would choose a simpler, more intuitive picture than quantum mechanics if you weren't interested in predicting anything.
If you are willing to permit some minimal physical input, then how about this: take the uncertainty principle as a postulate. Then you know that the effect on a system of doing measurement $A$ first, then measurement $B$, is different from doing $B$ first then $A$. That can be written down symbolically as $AB \neq BA$ or even $[A,B] \neq 0$. What kind of objects don't obey commutative multiplication? Linear operators acting on vectors! It follows that observables are operators and "systems" are somehow vectors. The notion of "state" is a bit more sophisticated and doesn't really follow without reference to measurement outcomes (which ultimately needs the Born rule). You could also argue that this effect must vanish in the classical limit, so then you must have $[A,B] \sim \hbar $, where $\hbar$ is some as-yet (and never-to-be, if you refuse to do experiments) undetermined number that must be small compared to everyday units. I believe this is similar to the original reasoning behind Heisenberg's matrix formulation of QM.
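A minimal sketch of such non-commuting observables, using the stock example of the Pauli spin matrices (measurements of spin along different axes):

```
# The Pauli matrices satisfy [sx, sy] = 2i*sz != 0:
# a two-by-two instance of AB != BA.
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

comm = sx @ sy - sy @ sx
print(np.allclose(comm, 2j * sz))   # True: the order of operations matters
```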
The problem is that this isn't physics: you don't know how to predict anything without the Born rule. And as far as I know there is no theoretical derivation of the Born rule; it is justified experimentally!
If you want a foundations viewpoint on why QM rather than something else, try looking into generalised probabilistic theories, e.g. this paper. But I warn you, these provide neither a complete, simple nor trivial justification for the QM postulates.
See edit to question. Obviously, you're going to have to appeal to experiment somewhere, but I feel as if the less we have to reference experiment, the more eloquent the answer would be. – Jonathan Gleason Dec 5 '12 at 18:15
i disagree entirely on that point, but obviously that's personal aesthetics. surely if you don't find experiments a beautiful proof, it would be better to argue that quantum mechanics is the most mathematically elegant physical theory possible, and thereby remove the pesky notion of those dirty, messy experiments completely! – Mark Mitchison Dec 5 '12 at 19:35
This seems to be a good starting point, but the problem is that measurements are not linear operators acting on vectors... But perhaps the example can be adapted. – Bzazz Feb 2 '13 at 11:35
@Bzazz Huh? The outcome of a (von Neumann) measurement is given by the projection of the initial state vector onto one of the eigenspaces of the operator describing the observable. That projection certainly is a linear, Hermitian operator. If the observables don't commute, they don't share the same eigenvectors, and therefore the order of projections matters. – Mark Mitchison Feb 4 '13 at 20:40
(contd.) In the more general case, a measurement is described by a CP map, which is a linear operator over the (vector) space of density matrices. The CP map can always be described by a von Neumann projection in a higher-dimensional space, and the same argument holds. – Mark Mitchison Feb 4 '13 at 20:41
You should use the history of physics to ask them questions where classical physics fails. For example, you can tell them the result of Rutherford's experiment and ask: if an electron is orbiting the nucleus, a charge is undergoing acceleration, so electrons should radiate electromagnetic energy. If that were the case, electrons would lose their energy and collapse onto the nucleus, which would end the existence of the atom within a fraction of a second (you can tell them to calculate it). But, as we know, atoms have survived for billions of years. How? Where's the catch?
+1 I also think using the history of physics is an excellent strategy, and it has the added value of teaching the history of physics! The conundrum of the electron not collapsing into the nucleus is a wonderful example; I also suggested the UV catastrophe, which doesn't appeal to any experimental results. – Joe Feb 2 '13 at 9:22
If I were designing an introduction to quantum physics for physics undergrads, I would seriously consider starting from the observed Bell-GHZ violations. Something along the lines of David Mermin's approach. If there is one thing that makes clear that no form of classical physics can provide the deepest law of nature, this is it. (This does make reference to experimental facts, albeit more of a gedanken nature. As others have commented, some link to experiments is, and should be, unavoidable.)
Excellent answer. What would be really fascinating would be to show Einstein the Bell-GHZ violations. I can't help but wonder what he would make of it. To me, these experiments confirm his deepest concern -- spooky action at a distance! – user7348 Dec 7 '12 at 16:33
Some time ago I contemplated Einstein's reaction (science20.com/hammock_physicist/…): "Einstein would probably have felt his famous physics intuition had lost contact with reality, and he would certainly happily have admitted that Feynman's claim "nobody understands quantum physics" makes no exception for him. I would love to hear the words that the most quotable physicist would have uttered at the occasion. Probably something along the lines "Magical is the Lord, magical in subtle and deceitful ways bordering on maliciousness"." – Johannes Dec 7 '12 at 19:14
Near the end of Dirac's career, he wrote “And, I think it might turn out that ultimately Einstein will prove to be right, ... that it is quite likely that at some future time we may get an improved quantum mechanics in which there will be a return to determinism and which will, therefore, justify the Einstein point of view. But such a return to determinism could only be made at the expense of giving up some other basic idea which we now assume without question. We would have to pay for it in some way which we cannot yet guess at, if we are to re-introduce determinism.” Directions in Physics p.10 – joseph f. johnson Feb 11 '13 at 9:43
Wonderful article. Even if Einstein had done nothing else than criticize QM, he would still be one of the greatest scientists ever. How long would it have taken us to go looking for these now experimental facts without the EPR paradox? – WetSavannaAnimal aka Rod Vance Aug 31 '13 at 0:33
Though there are many good answers here, I believe I can still contribute something which answers a small part of your question.
There is one reason to look for a theory beyond classical physics which is purely theoretical and this is the UV catastrophe. According to the classical theory of light, an ideal black body at thermal equilibrium will emit radiation with infinite power. This is a fundamental theoretical problem, and there is no need to appeal to any experimental results to understand it, a theory which predicts infinite emitted power is wrong.
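A quick numerical sketch of that divergence, assuming round SI constants and a black body at 5000 K: integrating the Rayleigh-Jeans energy density $u(\nu) = 8\pi\nu^{2}kT/c^{3}$ up to a growing frequency cutoff never converges, while Planck's law saturates.

```
# Compare total radiated energy density under Rayleigh-Jeans vs Planck
# as the frequency cutoff grows: RJ diverges (~nu_max**3), Planck converges.
import numpy as np

h, c, k, T = 6.626e-34, 3.0e8, 1.381e-23, 5000.0  # SI units, T = 5000 K

def rayleigh_jeans(nu):
    return 8 * np.pi * nu**2 * k * T / c**3

def planck(nu):
    return (8 * np.pi * h * nu**3 / c**3) / np.expm1(h * nu / (k * T))

for nu_max in (3e15, 1e16, 3e16):
    nu = np.linspace(1e11, nu_max, 400001)
    print(f"cutoff {nu_max:.0e} Hz:"
          f"  RJ = {np.trapz(rayleigh_jeans(nu), nu):.3e} J/m^3,"
          f"  Planck = {np.trapz(planck(nu), nu):.3e} J/m^3")
```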
The quantization of light solves the problem, and historically this played a role in the development of quantum mechanics.
Of course this doesn't point to any of the modern postulates of quantum mechanics you're looking to justify, but I think it's still good to use the UV catastrophe as one of the motivations to look for a theory beyond classical physics in the first place, especially if you want to appeal as little as necessary to experimental results.
It is a shame that statistical mechanics is not more widely taught. But, hey, we live in an age when Physics depts don't even teach Optics at the undergrad level anymore... Now the O.P. postulated a context where the students understood advanced Mechanics. So I fear the UV catastrophe, although historically and conceptually most important, will not ring a bell with that audience. – joseph f. johnson Feb 11 '13 at 10:01
All the key parts of quantum mechanics may be found in classical physics.
1) In statistical mechanics the system is also described by a distribution function. No definite coordinates, no definite momenta.
2) Hamilton developed his formalism for classical mechanics. His ideas were pretty much in line with the ideas put into modern quantum mechanics, long before any experiments: he tried to make physics as geometrical as possible.
3) From Lie algebras people knew that the translation operator has something to do with the derivative. From momentum conservation people knew that translations have something to do with momentum. It was not that strange to associate momentum with the derivative.
Now you should just mix everything: merge statistical mechanics with the Hamiltonian formalism and add the key ingredient which was obvious to radio-physicists: that you cannot have a short (i.e., localized) signal with a narrow spectrum.
Voila, you have quantum mechanics.
In principle, for your purposes, Feynman's approach to quantum mechanics may be more "clear". It was found long after the other two approaches, and is much less productive for the simple problems people usually consider while studying. That's why it is not that popular for starters. However, it might be simpler from the philosophical point of view. And we all know that it is equivalent to the other approaches.
As an initial aside, there is nothing uniquely ‘quantum’ about non-commuting operators or formulating mechanics in a Hilbert space, as demonstrated by Koopman–von Neumann mechanics, and there is nothing uniquely ‘classical’ about a phase-space coordinate representation of mechanics, as shown by Groenewold and Moyal’s formulation of quantum theory.
There does, however, exist a fundamental difference between quantum and classical theories. There are many ways of attempting to distil this difference, whether it is seen as non-locality, uncertainty or the measurement problem, but the best way of isolating what distinguishes them that I have heard is this:
Quantum mechanics is about how probability phase and probability amplitude interact. This is what is fundamentally lacking in Hilbert space formulations of classical mechanics, where the phase and amplitude evolution equations are fully decoupled. It is this phase-amplitude interaction that gives us the wave-particle behaviour, electron diffraction in the two slits experiment, and hence an easy motivation for (and probably the most common entry route into) quantum mechanics. This phase-amplitude interaction is also fundamental to understanding canonically conjugate variables and the uncertainty problem.
I think that if this approach were to be taken, the necessity of a different physical theory can be most easily initially justified by single-particle interference, which then leads on to the previously mentioned points.
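A minimal sketch of that phase-amplitude interaction, assuming an idealized two-slit geometry with equal amplitudes and invented numbers: adding the amplitudes before squaring produces fringes, adding probabilities does not.

```
# Two-slit intensity: |psi1 + psi2|^2 oscillates with the relative phase,
# while |psi1|^2 + |psi2|^2 (no phase information) stays flat.
import numpy as np

wavelength, d, L = 50e-12, 1e-6, 1.0   # illustrative electron-scale values
x = np.linspace(0, 50e-6, 5)           # one fringe period on the screen

phase = 2 * np.pi * d * x / (wavelength * L)   # path-difference phase
psi1 = np.ones_like(x, dtype=complex)          # amplitude from slit 1
psi2 = np.exp(1j * phase)                      # slit 2, phase-shifted

quantum = np.abs(psi1 + psi2)**2               # [4, 2, 0, 2, 4]: interference
classical = np.abs(psi1)**2 + np.abs(psi2)**2  # [2, 2, 2, 2, 2]: flat
print(np.round(quantum, 3), np.round(classical, 3))
```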
So far as I understand, you are asking for a minimalist approach to quantum mechanics which would motivate its study with little reference to experiments.
The bad. So far as I know, there is not a single experiment or theoretical concept that can motivate your students about the need to introduce Dirac kets $|\Psi\rangle$, operators, Hilbert spaces, the Schrödinger equation... all at once. There are two reasons for this and both are related. First, the ordinary wavefunction or Dirac formulation of quantum mechanics is too different from classical mechanics. Second, the ordinary formulation was developed in pieces by many different authors who tried to explain the results of different experiments --many authors won a Nobel prize for the development of quantum mechanics--. This explains why "for a couple of years now", the only answers you have come up with are "incomplete, not entirely satisfactory".
The good. I believe that one can mostly satisfy your requirements by using the modern Wigner & Moyal formulation of quantum mechanics, because this formulation avoids kets, operators, Hilbert spaces, the Schrödinger equation... In this modern formulation, the relation between the classical (left) and the quantum (right) mechanics axioms are
$$A(p,x) \rho(p,x) = A \rho(p,x) ~~\Longleftrightarrow~~ A(p,x) \star \rho^\mathrm{W}(p,x) = A \rho^\mathrm{W}(p,x)$$
$$\frac{\partial \rho}{\partial t} = \{H, \rho\} ~~\Longleftrightarrow~~ \frac{\partial \rho^\mathrm{W}}{\partial t} = \{H, \rho^\mathrm{W}\}_\mathrm{MB}$$
$$\langle A \rangle = \int \mathrm{d}p \mathrm{d}x A(p,x) \rho(p,x) ~~\Longleftrightarrow~~ \langle A \rangle = \int \mathrm{d}p \mathrm{d}x A(p,x) \rho^\mathrm{W}(p,x)$$
where $\star$ is the Moyal star product, $\rho^\mathrm{W}$ the Wigner distribution and $\{ , \}_\mathrm{MB}$ the Moyal bracket. The functions $A(p,x)$ are the same as in classical mechanics. An example of the first quantum equation is $H \star \rho_E^\mathrm{W} = E \rho_E^\mathrm{W}$, which gives the energy eigenvalues.
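A minimal sympy sketch of the star product, truncated at first order in $\hbar$ (exact for the linear observables below; the full Moyal product exponentiates the Poisson bidifferential operator):

```
# First-order Moyal star product: f*g = fg + (i*hbar/2){f,g} + O(hbar^2).
import sympy as sp

x, p, hbar = sp.symbols('x p hbar', real=True)

def poisson(f, g):
    return sp.diff(f, x) * sp.diff(g, p) - sp.diff(f, p) * sp.diff(g, x)

def star(f, g):   # truncation at first order in hbar
    return sp.expand(f * g + sp.I * hbar / 2 * poisson(f, g))

print(star(x, p))   # x*p + I*hbar/2
print(star(p, x))   # p*x - I*hbar/2
print(sp.simplify((star(x, p) - star(p, x)) / (sp.I * hbar)))   # 1 = {x, p}
```

The last line recovers $x \star p - p \star x = i\hbar$, the phase-space counterpart of the canonical commutator.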
Now the second part of your question. What is the minimal motivation for the introduction of the quantum expressions on the right? I think that it could be as follows. There are a number of experiments that suggest the uncertainty relation $\Delta p \Delta x \geq \hbar/2$, which cannot be explained by classical mechanics. This experimental fact can be used as motivation for the substitution of the commutative phase space of classical mechanics by a non-commutative phase space. Mathematical analysis of the non-commutative geometry reveals that ordinary products in phase space have to be substituted by star products, that the classical phase-space state has to be substituted by one, $\rho^\mathrm{W}$, which is confined to phase-space regions larger than the Planck scale, and that Poisson brackets have to be substituted by Moyal brackets.
Although this minimalist approach cannot be obtained by using the ordinary wavefunction or Dirac formalism, there are three disadvantages with the Wigner & Moyal approach. (i) The mathematical analysis is very far from trivial. The first quantum equation above is easily derived by substituting the ordinary product by a star product and $\rho \rightarrow \rho^\mathrm{W}$ in the classical expression. The third quantum equation can also be obtained in this way, because it can be shown that
$$ \int \mathrm{d}p \mathrm{d}x A(p,x) \star \rho^\mathrm{W}(p,x) = \int \mathrm{d}p \mathrm{d}x A(p,x) \rho^\mathrm{W}(p,x)$$
A priori one could believe that the second quantum equation is obtained in the same way. This does not work and gives an incorrect equation. The correct quantum equation of motion requires the substitution of the whole Poisson bracket by a Moyal bracket. Of course, the Moyal bracket accounts for the non-commutativity of the phase space, but there is no justification for its presence in the equation of motion from non-commutativity alone. In fact, this quantum equation of motion was originally obtained from the Liouville-von Neumann equation via the formal correspondence between the phase space and the Hilbert space, and any modern presentation of the Wigner & Moyal formulation that I know justifies the form of the quantum equation of motion via this formal correspondence. (ii) The theory is backward-incompatible with classical mechanics, because the commutative geometry is entirely replaced by a non-commutative one. As a consequence, no $\rho^\mathrm{W}$ can represent a pure classical state --a point in phase space--. Notice that this incompatibility is also present in the ordinary formulations of quantum mechanics --for instance, no wavefunction can describe a pure classical state completely--. (iii) The introduction of spin in the Wigner & Moyal formalism is somewhat artificial and still under active development.
The best? The above three disadvantages can be eliminated in a new phase space formalism which provides a 'minimalistic' approach to quantum mechanics by an improvement over geometrical quantisation. This is my own work and details and links will be disclosed in the comments or in a separated answer only if they are required by the community.
+1 for the detailed and interesting answer. But the statement that motivations of QM from a small number of physical principles 'do not work' is quite pessimistic. Much of the mathematical formulation (e.g. the Lorentz transformations) underpinning Einstein's work was already in place when he discovered relativity, precisely because people needed some equations that explained experiments. This situation may be analogous to the current state of affairs with QM, irrespective of quantisation scheme. Then Einstein came along and explained what it all means. Who's to say that won't happen again? – Mark Mitchison Dec 8 '12 at 13:28
@MarkMitchison: Thank you! I eliminated the remarks about Einstein and Weinberg (I did mean something close to what you and Emilio Pisanty wrote above) but my own explanation was a complete mess. I agree with you on that what Einstein did could happen again! Precisely I wrote in a paper dealing with foundations of QM: "From a conceptual point of view, the elimination of the wavefunctions from quantum theory is in line with the procedure inaugurated by Einstein with the elimination of the ether in the theory of electromagnetism." – juanrga Dec 8 '12 at 17:51
-1 I suppose the O.P. would like something that has some physical content or intuition linked to it, and that is lacking in what you suggest, so I do not think that pedagogically it would be very useful for this particular purpose. This is not meant as a criticism of its worth as a contribution to scholarship. – joseph f. johnson Feb 11 '13 at 9:34
@josephf.johnson Disagree. The phase space approach has more physical content and is more intuitive than the old wavefunction approach. – juanrga Feb 14 '13 at 19:32
I think you have misunderstood the whole point of what the O.P. was asking for, although your contribution might have been valuable as an answer to a different question. What your answer lacks is any compelling immanent critique of Classical Mechanics, an explanation of why it cannot possibly be true. And it wasn't experiments that suggested the Heisenberg uncertainty relations since the experiments then weren't good enough to get anywhere near the theoretical limits. Only recently have such fine measurements been attained. – joseph f. johnson Feb 14 '13 at 19:40
I always like to read "BERTLMANN'S SOCKS AND THE NATURE OF REALITY" by J. Bell to remind myself when and why a classical description must fail.
He basically refers to the EPR correlations. You could motivate his reasoning by comparing common set theory (e.g. take three different sets A, B, C and try to merge them somehow) with the same concept of "sets" in Hilbert spaces, and you will see that they are not equal (Bell's theorem).
It seems to me your question is essentially asking for a Platonic mathematical model of physics, underlying principles from which the quantum formalism could be justified and in effect derived. If so, that puts you in the minority (but growing) realist physicist camp as opposed to the vast majority of traditional instrumentalists.
The snag is that the best, if not only, chance of developing a model like that requires either God-like knowledge or at least, with almost superhuman intuition, a correct guess at the underlying phenomena, and obviously nobody has yet achieved either sufficiently to unify all of physics under a single rubric along those lines.
In other words, ironically, to get at the most abstract explanation requires the most practical approach, rather as seeing at the smallest scales needs the largest microscope, such as the LHC, or Sherlock Holmes can arrive at the most unexpected conclusion only with sufficient data (Facts, Watson, I need more facts!)
So, despite being a fellow realist, I do see that instrumentalism (being content to model effects without seeking root causes, what might be compared with "black box testing") has been and remains indispensable.
-1 This is most unfair to the O.P. Why use labels like «Platonist»? The O.P. only asks for two things: an obvious and fundamental problem with Classical Mechanics, & a motivation for trying QM as the most obvious alternative. Asking for motivation is not asking for Platonist derivations nor is asking why should we give QM a chance asking for a derivation. The only physics in your answer is the Fourier transform version of the uncertainty principle, when you remark about fine resolution needing a large microscope. But the OP asks you to motivate that principle, and you merely assert it. – joseph f. johnson Feb 11 '13 at 10:14
I highly recommend this introductory lecture on Quantum Mechanics: http://www.youtube.com/watch?v=TcmGYe39XG0
Thomas's Calculus has an instructive Newtonian Mechanics exercise which everyone ought to ponder: the gravitational field strength inside the Earth is proportional to the distance from the centre, and so is zero at the centre. And, of course, there is the rigorous proof that if the matter is uniformly distributed in a sphere, then outside the sphere it exerts a gravitational force identical to what would have been exerted if all the mass had been concentrated at the centre.
Now if one ponders this from a physical point of view, «what is matter», one ends up with logical and physical difficulties that were only answered by de Broglie and Schroedinger's theory of matter waves.
This also grows out of pondering Dirac's wise remark: if «big» and «small» are merely relative terms, there is no use in explaining the big in terms of the small... there must be an absolute meaning to size.
Is matter a powder or fluid that is evenly and continuously distributed and can take on any density (short of infinity)? Then that sphere of uniformly distributed matter must shrink to a point of infinite density in a finite amount of time... Why should matter be rigid and incompressible? Really, this is inexplicable without the wave theory of matter. Schroedinger's equation shows that if, for some reason, a matter wave starts to compress, then it experiences a restoring force to oppose the compression, so that it cannot proceed past a certain point (without pouring more energy into it).
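A standard back-of-the-envelope version of this restoring force, for the illustrative case of an electron bound to a proton, balances the kinetic energy demanded by the uncertainty principle against the electrostatic attraction:

$$ E(r) \;\approx\; \frac{\hbar^2}{2 m r^2} \;-\; \frac{e^2}{4\pi\varepsilon_0 r}, \qquad \frac{\mathrm{d}E}{\mathrm{d}r}\bigg|_{r_0} = 0 \;\Rightarrow\; r_0 = \frac{4\pi\varepsilon_0 \hbar^2}{m e^2} \approx 0.53 \times 10^{-10}\ \mathrm{m}. $$

Squeezing the matter wave below $r_0$ raises the total energy, which is exactly the obstruction to collapse that classical point-matter lacks.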
See the related http://physics.stackexchange.com/a/18421/6432 . Only this can explain why the concept of «particle» can have some validity and not need something smaller still to explain it.
A few millennia ago, an interesting article was published in The Annals of Mathematics about Newtonian particle mechanics: a system of seven particles and specific initial conditions were discovered whereby one of the particles is whipped up to infinite velocity in finite time without any collisions. But this is not quite decisive enough for your purposes. And Earnshaw's theorem is a little too advanced, although it is often mentioned in your context (e.g., by Feynman and my own college teacher, Prof. Davidon). – joseph f. johnson Feb 11 '13 at 9:56
This is a late-arriving comment relevant to the teaching problem you have (but not an answer - I tried commenting but it was getting too big).
Something you might mention in your class is modern control systems theory as taught to engineering students. I came to QM after I had studied control systems and practised it in my job for a number of years, and there is a natural feel to QM after this. Now I wonder whether QM might not have influenced the formulation of control systems theory. But basically one has a state space - the linear space of the minimum data one needs to uniquely define the system's future - a Schrödinger-like evolution equation, and observables that operate on the state and thus gather data for the feedback controller. The interpretation of the observables is radically different from how it's done in QM, however. But "evolving state + measurements" is the summary, and even so, uncertainty in the observables leads to the whole nontrivial fields of stochastic control systems and robust control systems (those that work even notwithstanding uncertainties in the mathematical models used).

The engineering viewpoint is also very experimental: you seek to model your system accurately, but you very deliberately don't give a fig how that model arises unless the physics can help you tune it. Often the problems are so drenched with uncertainty that it's just no help at all to probe the physics deeply; indeed, control systems theory is about dealing with uncertainty, reacting to it and steering your system on a safe course even though uncertain, uncontrollable outside forces buffet it endlessly. There are even shades of the uncertainty principle here: if your state model is uncertain and being estimated (e.g. by a Kalman filter), what your controller does will disturb the system you are trying to measure. Although of course this is the observer effect and not the Heisenberg principle, one does indeed find oneself trying to minimise the product of two uncertainties: you are wrestling with the tradeoff between the need to act and the need to measure.
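A minimal sketch of that "evolving state + measurements" pattern, assuming a toy discrete-time linear system with a noisy position sensor and a hand-tuned observer gain standing in for a full Kalman filter (all numbers are illustrative):

```
# Discrete-time linear system (position, velocity) observed through a
# noisy position sensor; a fixed-gain Luenberger-style observer tracks it.
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[1.0, 0.1], [0.0, 1.0]])   # state transition
C = np.array([[1.0, 0.0]])               # we can only measure position
L = np.array([[0.5], [1.0]])             # observer gain, hand-tuned so that
                                         # the error dynamics are stable

x = np.array([[0.0], [1.0]])   # true state
x_hat = np.zeros((2, 1))       # estimate

for k in range(50):
    x = A @ x                                        # true evolution
    y = C @ x + 0.05 * rng.standard_normal((1, 1))   # noisy measurement
    x_hat = A @ x_hat + L @ (y - C @ (A @ x_hat))    # predict, then correct
print(np.round((x - x_hat).ravel(), 3))   # estimation error stays small
```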
This story won't fully motivate the subject in the way you want, but it would still be interesting to show that there is a whole group of engineers and mathematicians who think this way and indeed find it very natural and unmysterious even when they first learn it. I think a crucial point here is that no one frightens students of control theory before they begin with talk of catastrophic failure of theory, the need to wholly reinvent a field of knowledge, and intellectual struggles that floored the world's best minds for decades. Of course in physics you have to teach why people went this way, but it's also important to stress that these same great minds who were floored by the subject have smoothed the way for us, so that now we stand on their shoulders and really can see better, even though we may be far from their intellectual equals.
Classical mechanics is, on the one hand, not a final theory, and on the other, not decomposable into anything more fundamental. So you cannot improve it; it is given as it is.
For example, you cannot explain why a moving body, having disappeared from the previous point of its trajectory, should reappear at an infinitesimally close point rather than a meter ahead (teleporting). What constrains the points of a trajectory to form a continuous line? There is no answer. It is an axiom. You cannot build a mechanism for this constraint.
Another example: you cannot stop decomposing bodies into parts. You never reach final elements (particles), and if you do, you can no longer explain why those particles are indivisible. In classical physics matter should be continuous, yet you cannot imagine how material points could exist.
You also cannot explain how the entire infinite universe can exist simultaneously in all its information. What is happening in an absolutely closed box, or in absolutely unreachable regions of spacetime? Classical physics leads us to think that reality is real there too. But how can it be, if it is absolutely undetectable? The scientific approach says that only what is measurable exists. So how can there be a definite reality inside an absolutely closed box (with a cat in it)?
In classical mechanics you cannot have absolute identity of building blocks. For example, if all atoms are built of protons, neutrons and electrons, these particles are similar but not the same. Two electrons in two different atoms are, classically, two copies of one prototype, not the prototype itself. So you cannot define truly basic building blocks of reality in classical physics.
You cannot define indeterminism in classical physics. You cannot define unrealized possibilities, and you cannot say what happened to a possibility that was possible but not realized.
You cannot define nonlocality in classical physics. There are only two options: one event affects another (cause and effect), or the two events are independent. You cannot imagine two events that are correlated yet do not affect each other. This is possible, but unimaginable, in classical physics!
Thursday, March 19, 2015
The Saga of the Sunstones
In the Dark Ages, the Vikings set out in their longships to slaughter, rape, pillage, and conduct sophisticated measurements in optical physics. That, at least, has been the version of horrible history presented recently by some experimental physicists, who have demonstrated that the complex optical properties of the mineral calcite or Iceland spar can be used to deduce the position of the sun – often a crucial indicator of compass directions – on overcast days or after sunset. The idea has prompted visions of Norse raiders and explorers peering into their “sunstones” to find their way on the open sea.
The trouble is that nearly all historians and archaeologists who study ancient navigation methods reject the idea. Some say that at best the fancy new experiments and calculations prove nothing. Historian Alun Salt, who works for UNESCO’s Astronomy and World Heritage Initiative, calls the recent papers “ahistorical” and doubts that the work will have any effect “on any wider research on navigation or Viking history”. Others argue that the sunstone theory was examined and ruled out years ago anyway. “What really surprises me and other Scandinavian scholars about the recent sunstone research is that it is billed as news”, says Martin Rundkvist, a specialist in the archaeology of early medieval Sweden.
This debate doesn’t just bear on the unresolved question of how the Vikings managed to cross the Atlantic and reach Newfoundland without even a compass to guide them. It also goes to the heart of what experimental science can and can’t contribute to an understanding of the past. Is history best left to historians and archaeologists, or can “outsiders” from the natural sciences have a voice too?
What a saga
The sunstone hypothesis certainly isn’t new. It stems largely from a passage in a thirteenth-century manuscript called St Olaf’s Saga, in which the Icelandic hero Sigurd tells King Olaf II Haraldsson of Norway where the sun is on a cloudy day. Olaf checks Sigurd’s claim using a mysterious sólarsteinn or sunstone:
Olaf grabbed a Sunstone, looked at the sky and saw from where the light came, from which he guessed the position of the invisible Sun.
An even more suggestive reference appears in another thirteenth-century record of a Viking saga, called Hrafns Saga, which gives a few more clues about how the stone was used:
the weather was sick and stormy… The King looked about and saw no blue sky… then the King took the Sunstone and held it up, and then he saw where the Sun beamed from the stone.
In 1967 Danish archaeologist Thorkild Ramskou suggested that this sunstone might have been a mineral such as the aluminosilicate cordierite, which is dichroic: as light passes through, rays of different polarization are transmitted by different amounts, depending on the orientation of its crystal planes (and thus its macroscopic facets) relative to the plane of polarization. This makes cordierite capable of transmitting or blocking polarized rays selectively – which is how normal polarizing filters work. (Ramskou also suggested that the mineral calcite, a form of calcium carbonate, would work as a sunstone, based on the fact that calcite is birefringent: rays with different polarizations are refracted to different degrees depending on the orientation with respect to the crystal planes. But that’s not enough, because calcite is completely transparent: changing its orientation makes no difference to how much polarized light passes through. You need dichroism for this idea to work, not birefringence.)
Because sunlight becomes naturally polarized as it is scattered in the atmosphere, if cordierite is held up to sunlight and rotated it turns darker, becoming most opaque when the crystal planes are at right angles to the direction of the sun’s rays. Even if the sun itself is obscured by mist or clouds and its diffuse light arrives from all directions, the most intense of the polarized rays still come straight from the hidden sun. So if a piece of dichroic mineral is held up to the sky and rotated, the pattern of darkening and lightening can be used to deduce, from the orientation of the crystal’s facets (which reveal the orientation of the planes of atoms), the direction of the sun in the horizontal plane, called its azimuth. If you know the time of day, then this angle can be used to calculate where north lies.
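That last step can be sketched in modern terms, assuming standard solar-position approximations (no equation-of-time or refraction corrections) and invented numbers for the ship: given the sun's bearing relative to the bow and the true solar azimuth computed from date, time and latitude, the heading follows.

```
# Recover the ship's heading from the sun's bearing relative to the bow.
import math

def solar_azimuth_deg(day_of_year, solar_hour, latitude_deg):
    """Sun's azimuth measured clockwise from true north, in degrees."""
    decl = math.radians(-23.44 * math.cos(2 * math.pi * (day_of_year + 10) / 365))
    hour_angle = math.radians(15 * (solar_hour - 12))   # 15 degrees per hour
    lat = math.radians(latitude_deg)
    az_from_south = math.atan2(
        math.sin(hour_angle),
        math.cos(hour_angle) * math.sin(lat) - math.tan(decl) * math.cos(lat),
    )
    return (math.degrees(az_from_south) + 180.0) % 360.0

# Suppose the crystal says the sun lies 40 degrees to starboard of the bow
# at 3 pm solar time, day 172 (late June), latitude 61 N:
sun_true = solar_azimuth_deg(172, 15.0, 61.0)
bow_heading = (sun_true - 40.0) % 360.0
print(round(sun_true, 1), round(bow_heading, 1))
```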
Ramskou pointed out that polarizing materials were once used in a so-called Twilight Compass by Scandinavian air pilots who flew over the north pole. Their ordinary compasses would have been useless then, but the Twilight Compass allowed them to get their bearings from the sun. So maybe the Vikings did the same out on the open sea? Might they have chanced upon this handy property of calcite, found in abundance on Iceland? Perhaps all Viking ships set sail with a sunstone to hand, so that even on overcast or foggy days when the sun wasn’t visible they could still locate it and find their bearings.
The idea has been discussed for years among historians of Viking navigation. But only recently has it been put to the test. In 1994, astronomer Curt Roslund and ophthalmologist Claes Beekman of Gothenburg University showed that the pattern of darkening produced by a dichroic mineral in diffuse sunlight is too weak to give a reliable indication of the sun’s location. They added that such a fancy way to find the hidden sun seems to be unnecessary for navigation anyway, because it’s possible to locate the sun quite accurately with the naked eye when it is behind clouds from the bright edges of the cloud tops and the rays that emanate from behind the cloud. The sunstone idea, they said, “has no scientific basis”.
That was merely the opening sally of a seesawing debate. In 2005, Gabór Horváth at the Loránd Eötvös University in Budapest, a specialist in animal vision, and his colleagues tested subjects using photographs of partly cloudy skies in which the sun was obscured, and found that they couldn’t after all make a reasonably accurate deduction of where the sun was. Two years later Horváth and collaborators measured the amount and patterns of polarization of sunlight in cloudy and foggy skies and concluded that both are after all adequate for the “polarizer” sunstones to work in cloudy skies, but not necessarily in foggy skies. All this seemed enough to rehabilitate the plausibility of the sunstone hypothesis. But would it work in practice?
Double vision
Optical physicists Guy Ropars and Albert Le Floch at the University of Rennes had been working for decades on light polarization effects in lasers. In the 1990s they came across the sunstone idea and the objections of Roslund and Beekman. While Horváth’s studies seemed to show that it wasn’t after all as simple as they had supposed to find the sun behind clouds, Ropars and Le Floch agreed with their concern that the simple darkening of a dichroic crystal due to polarization effects is too weak to do that job either. The two physicists also pointed out that Ramskou’s suggestion of using birefringent calcite this way won’t work. But, they said, calcite has another property that presents a quite different way of using it as a sunstone.
When a calcite crystal is oriented so that a polarized ray strikes at right angles to the main facet of the rhombohedral crystal, but at exactly 45 degrees to the optical axis – at the so-called isotropy point – it turns out that the light emerging at this position is completely depolarized. As a result, it’s possible to find the azimuth of a hidden sun by exploiting the sensitivity of the naked eye to polarized light. When polarized white light falls on our eye’s fovea, we can see a pattern in which two yellowish blobs fan out from a central focus within a bluish background. This pattern, called Haidinger’s brushes, is most easily seen by looking at a white sheet of paper illuminated with polarized white light, and rotating a polarizing filter in front of it. We can see it too on a patch of blue sky overhead when the sun is near (or below) the horizon, by rotating our head. By placing a calcite crystal in the line of the polarized rays, oriented to its isotropy point relative to the sun’s azimuth, the polarization is removed and Haidinger’s brushes vanish. Comparing the two views by moving the crystal rapidly in and out of the line of sight, the researchers found that the sun’s azimuth can be estimated to within five degrees.
Haidinger’s brushes: an exaggerated view.
But it’s a rather cumbersome method, relies on there being at least a high patch of unobstructed sky, and would be very tricky on board a pitching ship. There is, however, a better alternative.
Because calcite is birefringent, when a narrow and partially polarized light ray passes through it, the ray is split in two, an effect strikingly evident with laser beams. One ray behaves as it would if just travelling through glass, but the other is deviated by an amount that depends on the thickness of the crystal and the angle of incidence. This is the origin of the characteristic double images seen through birefringent materials. And whereas Roslund and Beekman had argued that changes in brightness for a dichroic substance rotated in dim, partially polarized light are likely to be too faint to distinguish, the contrast between the split-beam intensities as the calcite is rotated is much stronger and easier to spot. “The sensitivity of the system is then increased by a factor of about 100”, Ropars explains. At the isotropy point, the two rays will have exactly the same brightness, regardless of how polarized the light is. This means that, if we can accurately judge this position of equal brightness, the orientation of the crystal at that point can again be used to figure out the azimuth from which the most intense rays are coming.
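A simple Malus-law-style model, with invented numbers, shows how sharply the equal-brightness condition picks out the polarization direction:

```
# Partially polarized skylight split by calcite into ordinary and
# extraordinary beams; their intensities match where the crystal sits
# 45 degrees from the (unknown) polarization direction.
import numpy as np

pol_fraction = 0.2    # weakly polarized overcast light (assumed)
sun_pol_deg = 30.0    # polarization angle, unknown to the navigator

angles = np.linspace(0, 180, 3601)     # crystal orientations, degrees
rel = np.radians(angles - sun_pol_deg)
ordinary = (1 - pol_fraction) / 2 + pol_fraction * np.cos(rel)**2
extraordinary = (1 - pol_fraction) / 2 + pol_fraction * np.sin(rel)**2

best = angles[np.argmin(np.abs(ordinary - extraordinary))]
print(best)   # ~75 deg, i.e. 45 deg away from the 30 deg polarization axis
```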
Double images and split laser beams in calcite, due to birefringence.
The human eye happens to be extremely well attuned to comparing brightness contrasts of fairly low-level lighting. So the researchers’ tests using partially polarized light shone through a calcite crystal showed that, under ideal conditions, the direction of the light rays could be estimated to within 1 degree even for low overall light intensities, equivalent to a sun below the horizon at twilight. The method, they say, will work even up to the point where the first stars appear in the sky.
Showing all this in the lab is one thing, but can it be turned into a navigational instrument? Ropars, Albert Le Floch and their coworkers have already made one. They call it the Viking Sunstone Compass.
It’s a rather beautiful wooden cylinder with a hole in the top, through which light falls from the zenith of the sky onto a calcite crystal attached to a rotating pivot turned by a little handle on the lid. There’s a gap in the side through which the observer looks at the two bright spots projected from the crystal. “You simply rotate the crystal to equalize the intensities of the beams”, says Ropars. A pointer on the lid then indicates the orientation of the crystal and the azimuth of the sun, from which north can be deduced by taking into account the time of day. Ropars says that, even though of course the Vikings lacked good chronometers, they seem to have known about sundials. What’s more, studies have shown that people’s internal body clocks (their circadian rhythm) can enable us to estimate the time of day to within about a quarter of an hour.
The Viking Sunstone Compass made by researchers at the University of Rennes. Note the double bright spots in the cavity.
But never mind Vikings – the Rennes team could probably make a mint by marketing these elegant devices as a luxury item for sailors. Ropars says that a US company is now hoping to commercialize the device based on their prototype.
All at sea
When the findings were reported, they spawned a flurry of excited news headlines, many claiming that the mysteries of Viking navigation had finally been solved. It’s not surprising, for the image of brawny Vikings making use of such a brainy method is irresistible. But what, in the end, did the experiments really tell us about history?
There’s nothing in principle that might have prevented the ancient Greeks from developing steam power or microscopes. We are sure that they didn’t because there is absolutely no evidence for it. So an experiment demonstrating that, say, ancient Greek glass-making methods allow one to make the little glass-bead microscope lenses used by Antoni van Leeuwenhoek in the seventeenth century is historically meaningless. What, then, can we conclude about Viking sunstones?
Because the Viking voyages between the ninth and eleventh centuries were so extensive – they sailed to the Caspian Sea, across the Mediterranean to Constantinople, and over the Atlantic to North America – there is a pile of archaeological and historical research on how on earth they did it. The prevailing view is that, in the Dark and Middle Ages, as much sailing as possible was done in sight of land, so that landmarks could guide the way. But of course you can’t cross the Atlantic that way. So if no land was in sight, sailors used environmental signposts: the stars (the Vikings knew how to find north from the Pole Star), the sun and moon, winds and ocean currents. They also relied on the oral reports of previous voyagers to know how long it should take to get to particular places.
What if none of these clues was available? What did they do if becalmed in the open sea on a cloudy day? Well, then they admitted that they were lost – as they put it, hafvilla, “wayward at sea”. The written records indicate that under such circumstances they would convene to discuss the problem, relying on the instincts of the most experienced sailors to set a course.
However, some archaeologists and historians, like Ramskou, have argued that they could also have used navigational instruments. The problem is that there is precious little evidence for it. The Scandinavian coast is dotted with Viking ship finds, some of them wrecks and others buried to hold the dead in graves. But not one has provided any artifacts that could be navigational tools. Nevertheless, the archaeological record is not entirely barren. In 1948 a Viking-age wooden half-disk carved with sun-like serrations was unearthed under the ruins of a monastery at Uunartoq in Greenland. It was interpreted by the archaeologist Carl Sølver as a navigational sundial, an idea endorsed by Ramskou in the 1960s. More recently another apparent wooden sundial was found at the Viking site on the island of Wolin, off the coast of Poland in the Baltic. A rectangular metal object inscribed in Latin, found at Canterbury and tentatively dated to the eleventh century, has also been interpreted as a sundial, while a tenth-century object from Menzlin in Germany might be a nautical weather-vane.
A Viking ship grave at Oseberg in Norway, and the Uunartoq Viking sundial.
So the “instrumental school” of Viking navigation has a few tenuous sources. But no sunstones. That hasn’t previously deterred the theory’s champions. One of them was Leif Karlsen, an amateur historian whose 2003 book Secrets of the Viking Navigators announced his convictions in its subtitle: “How the Vikings used their amazing sunstones and other techniques to cross the open ocean”. One problem with such a bold claim is that the sunstone hypothesis had already been carefully examined in 1975 by the archaeologist Uwe Schnall, who argued that not only is there no evidence for it but there is no clear need either. “Since then, to my knowledge, no research has contradicted this conclusion”, says Willem Mörzer Bruyns, a retired curator of navigation at the Netherlands Maritime Museum in Amsterdam.
In making his case, however, Karlsen presented a new exhibit. In 2002, just as his book was being completed, archaeologists discovered a calcite crystal in the remains of a shipwreck offshore from the Channel Island of Alderney. It has been made misty by centuries of immersion in seawater and abrasion by sand, but it still has the familiar rhombohedral shape. Finally, tangible proof that sailors carried sunstones! Well, not quite. Not only is it totally unknown why the crystal was on board, but the ship is from Elizabethan England, not the Viking age.
The Alderney “sunstone”.
All the same, Ropars and colleagues claim that it supports their theory that these crystals were used for navigation. They point out, for example, that it was found close to a pair of navigational dividers. But, says Bruyns, “navigational instruments were kept in the captain’s and officers’ quarters, where their non-navigational valuables were also stored.” All the same Bruyns is sympathetic to the idea that, rather than being a primary navigational device, the crystal might have been used to correct for compass errors caused by local magnetic variations (such as proximity to iron cannons), which was done at that time by looking at the sun’s position on the horizon when it rose or set. Ropars points out that birds use the same recalibration of their magnetic sensors using polarization of sunlight at sunrise and sunset. “We’re now looking for possible mentions of sunstones in the historical Navy reports of the 15th and 16th centuries”, he says. But however intriguing that idea is, it has no bearing on a possible use of sunstones for navigation in the pre-compass era. “The Alderney finding is from a completely different period and culture to the Vikings”, Ropars acknowledges.
Finding the right questions
One way to view the latest work on sunstones is that it could at least have ruled out the hypothesis in principle. But don’t historians need a good reason to regard a hypothesis as plausible in the first place, before they get concerned about whether it is possible in practice? Otherwise there is surely no end to the options one would need to exclude. And there is the difficult issue of the documentary record. Lots of what went on a millennium and more ago was not written down, and much of what was is now lost. All the same, there is a rich literature, at least from the Middle Ages, of the techniques and skills of trades and professions, while early pioneers of optics like Roger Bacon and Robert Grosseteste in the thirteenth century offer a pretty extensive summary of what was then known on the subject. It’s not easy to see how they would have neglected sunstones, if these were widely used in navigation. Ropars says that the Icelandic sagas aren’t any longer the only textual source for sunstones, for the Icelandic medieval historian Arni Einarsson pointed out in 2010 that sunstones are also mentioned in the inventory lists of some Icelandic monasteries in the fourteenth and fifteenth centuries, where they were apparently used as time-keeping tools for prayer sessions. But monks weren’t sailors.
The basic problem, says Salt, is that scientists dabbling in archaeology often try to answer questions that, from the point of view of history and anthropology, no one is asking. This has been a bugbear of the discipline of archaeoastronomy, for example, in which astronomers and others attempt to provide astronomical explanations of historical records of celestial events, such as darkening of the skies or the appearance of new stars and other portents. Explanations for the Star of Bethlehem have been particularly popular, but here too Salt thinks that it is hard to find any examples of a historically interesting question being given a compelling answer [see, e.g., J. British Astron. Assoc. 114, 336; 2004]. One of the most celebrated examples, also revolving around optical physics, was the suggestion by artist David Hockney and physicist Charles Falco that painters in the Renaissance such as Jan van Eyck used a camera obscura to achieve their incredible realism. The theory is now generally discounted by art historians.
“‘Could the Vikings have used sunstones?’ is a different question to ‘Did the Vikings use sunstones?’, which is what most historians are interested in,” says Salt. “A paper that tackles a historical problem by pretty much ignoring the historical period your artefact comes from seems to me to be eccentric.” Ropars agrees that “experimental science can exclude historical hypotheses, but isn’t sufficient to validate them.” But he is optimistic about the value of collaborations between scientists and historians or archaeologists, when the historical facts are sufficiently clear for the scientists to develop a plausible model of what might have occurred.
Could it be, though, that we’re looking at the sunstone research from the wrong direction? One of its most attractive outcomes is not an answer to a historical question, but a rich mix of mineralogy, optics and human vision that has inspired the invention of a charming device which, using only methods and materials accessible to the ancient world, enables navigation under adverse conditions. It would be rather lovely if the modern “Viking Sunstone Compass” were to be used to cross the Atlantic in a reconstructed Viking ship, as was first done in 1893. It would prove nothing historically, but it would show how speculations about what might have been can stimulate human ingenuity. And maybe that’s enough.
The reconstructed Viking ship the Sea Stallion sets sail.
Further reading
J. B. Friedman & K. M. Figg (eds), Trade, Travel and Exploration in the Middle Ages: An Encyclopedia, from p. 441. Routledge, London, 2000.
A. Englert & A. Trakadas (eds), Wulfstan’s Voyage, from p.206. Viking Ship Museum, Roskilde, 2009.
G. Horváth et al., Phil Trans. R. Soc. B 366, 772 (2011).
G. Ropars, G. Gorre, A. Le Floch, J. Enoch & V. Lakshminarayanan, Proc. R. Soc. A 468, 671 (2011).
A. Le Floch, G. Ropars, J. Lucas, S. Wright, T. Davenport, M. Corfield & M. Harrisson, Proc. R. Soc. A 469, 20120651 (2013).
G. Ropars, V. Lakshminarayanan & A. Le Floch, Contemp. Phys. 55, 302 (2014).
Note: A version of this article appears in New Scientist this week. A pdf of this article is available on my website here.
Wednesday, March 18, 2015
The graphene explosion
I haven’t found any reports of the opening of Cornelia Parker’s new solo show at the Whitworth in Manchester. Did the fireworks go off? Did the detonator work? Here, anyway, is what I wrote for Nature Materials before the event.
If all has gone according to plan as this piece went to press, Manchester will have been showered with meteorites. An exhibition at the University of Manchester’s Whitworth art gallery by the artist Cornelia Parker is due to be opened on 13th February with a firework display in which pieces of meteoritic iron will be shot into the sky.
The pyrotechnics won’t be started simply by lighting the blue touchpaper. The conflagration will be triggered by a humidity sensor, switched by the breath of physicist Kostya Novoselov, whose work on graphene at Manchester University with Andre Geim won them both the 2010 physics Nobel prize. The sensor is itself made from graphene, obtained from flakes of graphite taken from drawings by William Blake, J. M. W. Turner, John Constable and Pablo Picasso as well as from a pencil-written letter by Ernest Rutherford, whose pioneering work on atomic structure was conducted at Manchester.
That graphene (oxide) can serve as an ultra-sensitive humidity sensor was reported by Bi et al. [1], and the design has since been refined to give a very rapid response [2]. Adsorption of water onto the graphene oxide film alters its capacitance, providing a sensing mechanism when the film acts as an insulating layer between two electrodes. These sensors are now being developed by Nokia. The devices used for Parker’s show were provided by Novoselov’s group after the two of them were introduced by the Whitworth’s director Maria Balshaw. Novoselov extracted the graphite samples from artworks owned by the gallery, using tweezers under careful supervision.
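In outline, the readout could be as simple as the following sketch, assuming a hypothetical linear calibration from capacitance to relative humidity (all constants invented, purely to illustrate the mechanism):

```
# Toy capacitive humidity readout: adsorbed water raises the film's
# effective permittivity, so capacitance rises with relative humidity;
# a threshold then acts as a breath trigger.
C_DRY_PF = 10.0      # assumed dry-film capacitance, pF
SENSITIVITY = 0.25   # assumed pF per %RH (linearized calibration)
TRIGGER_RH = 75.0    # fire above this relative humidity, %

def relative_humidity(c_measured_pf):
    return (c_measured_pf - C_DRY_PF) / SENSITIVITY

for c in (12.0, 20.0, 31.0):   # capacitances as a breath approaches
    rh = relative_humidity(c)
    print(f"{c:5.1f} pF -> {rh:5.1f} %RH", "FIRE!" if rh > TRIGGER_RH else "")
```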
“I love the idea of working on a nano level”, Parker has said. “The idea of graphene, something so small, being a catalyst.” She is not simply talking figuratively: doped graphene has indeed been explored as an electrocatalyst for fuel cells [3,4].
Parker has a strong interest in interacting with science and scientists. In 1997 she produced a series of works for Nature examining unexpected objects in a quasi-scientific context [5]. Much of her work focuses on connotations of materiality, associations arising from what things are made of and the incongruity of materials repurposed or set out of place. Her installation Thirty Pieces of Silver (1988-9) used an assortment of silver objects such as instruments and cutlery flattened by a steamroller. She has worked with the red crepe paper left over from the manufacture of Remembrance Day poppies, with lead bullets and gold teeth extruded into wire, and with her own blood. Perhaps even her most famous work, Cold Dark Matter: An Exploded View (1991) – the reconvened fragments of an exploded shed – was stimulated as much by the allure of the “matter” as by the cosmological allusion.
“I like the garden shed aspect of scientists”, she has said, “the way they like playing about with materials.” Unusually for an artist, she seems more excited by the messy, ad hoc aspects of practical science – the kind of experimentation for which Rutherford was so renowned – than by grand, abstract ideas. The fact that Novoselov and Geim made some of their graphene samples using Scotch tape to strip away layers from graphite no doubt added to its appeal. Parker also recognizes that materials tell stories. There’s a good chance that both Blake and Rutherford would have used graphite from the plumbago mines of Borrowdale in Cumbria, about 80 miles north of Manchester and the source of the Keswick pencil industry. So even Parker’s graphene might be locally sourced.
1. Bi, H. et al., Sci. Rep. 3, 2714 (2013).
2. Borini, S. et al., ACS Nano 7, 11166-11173 (2013).
3. Geng, D. et al., Energy Environ. Sci. 4, 760-764 (2011).
4. Fei, H. et al., ACS Nano 8, 10837-10843 (2014).
5. Anon., Nature 389, 335, 548, 668 (1997).
Friday, March 06, 2015
Alchemy on the page
Here’s an extended version of my article in Chemistry World on the "Books of Secrets" exhibition currently at the Chemical Heritage Foundation in Philadelphia.
You thought your chemistry textbook was hard to follow sometimes? Consider what a student of chemistry might have faced in the early seventeenth century:
“Antimony is the true bath of gold. Philosophers call it the examiner and the stilanx. Poets say that in this bath Vulcan washed Phoebus, and purified him from all dirt and imperfection. It is produced from the purest Mercury and Sulphur, under the genus of vitriol, in metallic form and brightness. Some philosophers call it the White Lead of the Wise Men, or simply the Lead…”
This is a small part of the description in The Aurora of the Philosophers, a book attributed to the sixteenth-century Swiss alchemist and physician Paracelsus (1493-1541), for making the “arcanum of Antimony”, apparently a component of the “Red Tincture” or philosopher’s stone, which could transmute base metals into gold. It is, Paracelsus averred, a “very red oil, like the colour of a ruby… with a most fragrant smell and a very sweet taste” (which you could discover at some peril). The book contains very detailed instructions for how to make this stuff – provided that you know what “aquafortis”, “crocus of Mars” and “calcined tutia” are, and that you take care to control the heat of the furnace, in case (the author warns) your glass vessels and perhaps even the furnace itself should shatter.
All this fits the image of the alchemist depicted by Pieter Bruegel the Elder in a print of around 1558, which shows a laboratory in turmoil, littered with paraphernalia and smoky from the fire, where a savant works urgently to make gold while his household descends into disarray all around him. Bruegel’s engraving set the tone for pictures of alchemists at work over the next two centuries or so, in which they were often shown as figures of fun, engaged on a fool’s quest and totally out of touch with the real world.
Pieter Bruegel the Elder, The Alchemist (c.1558)
But that caricature doesn’t quite stand up to scrutiny. For one thing, despite all its arcane language that only fellow adepts would understand, Paracelsus’s experimental procedure is in fact quite carefully recorded: it’s not so different, once you grasp the chemical names and techniques, from something you’d find in textbooks of chemistry four centuries later. The aim – transmutation of metals – might seem misguided from this distance, but there’s nothing so crazy about the methods.
Second, the frenzied experimentation in Bruegel’s picture, in which the deluded alchemist commits his last penny to the crucible, is being directed by a scholar who sits at the back reading a book. (The text is, however, satirical: the scholar points to the words “Alge mist”, a pun on “alchemist” meaning “all is failed”, and we see the alchemist’s future in the window as he leads his family to the poorhouse.)
Books are ubiquitous in paintings of alchemists, which became a genre in their own right in the seventeenth century. Very often the alchemist is shown consulting a text, and even when he is working the bellows and experimenting himself, a book stands open in front of him. Sometimes it’s the act of reading, rather than experimenting, that supplies the satire: in a painting by the Dutch artist Mattheus van Helmont (no relation, apparently, to the famous chemist Jan Baptista van Helmont), the papers tumble from the desk to litter the floor in ridiculous excess. “The use of books and texts in alchemical practice may not be discussed frequently, but it becomes obvious when looking at the actual manuscripts used by alchemists and at the multitude of paintings that depict them”, says Amanda Shields, curator of fine art at the Chemical Heritage Foundation (CHF) in Philadelphia.
After David Teniers the Younger, Alchemist with Book and Crucible (c.1630s)
Mattheus van Helmont, The Alchemist (17th century)
The complex relationship of alchemists to their books is explored in a current exhibition at the CHF called "Books of Secrets: Writing and Reading Alchemy". It was motivated by the Foundation’s recent acquisition of a collection of 12 alchemical manuscripts, mostly from the fifteenth century. They were bought from a dealer after having been auctioned by the Bibliotheca Philosophica Hermetica, a private collection of esoteric books based in Amsterdam and funded by the Dutch businessman Joost Ritman. Among the new acquisitions was one of just six existing complete copies of the highly influential Pretiosa margarita novella (Precious New Pearl) supposedly by the fourteenth-century Italian alchemist Petrus Bonus. The CHF already possessed one of the most substantial collections of paintings of alchemists in the world, mostly from the seventeenth to the nineteenth centuries, and while being keenly aware of the difference between the dates of the books and the paintings, Shields and the CHF’s curator of rare books James Voelkel saw an opportunity to use these two resources to explore what books meant for the alchemists and early chemists: who wrote them, who they were intended for, who actually bought them, and how they were read.
Telling secrets
Of course, there weren’t really any students of chemistry in the early seventeenth century. That discipline didn’t exist for at least another hundred years, and its emergence from alchemy was convoluted and disputed. Arguably the first real textbook of chemistry was Cours de chymie, published in 1660 by the Frenchman Nicaise Lefebvre, who would have been identified by the transitional terms chymist or iatrochemist, the latter indicating the use of chemistry in medicine. Alchemy was still very much in the air throughout the seventeenth century: both Robert Boyle and Isaac Newton devoted a great deal of effort to discovering the philosopher’s stone, and neither of them doubted that the transmutation of metals was possible. But it wasn’t by any means all about making gold. In the sixteenth century just about any chemical manipulation, whether to make medicines, pigments and dyes, or simple household substances such as soap, would have been regarded as a kind of alchemy.
This is why the whole notion of an “alchemical literature” is ambiguous. Some writers, such as the late sixteenth-century physician Michael Maier, who directed alchemical experiments in the court of the Holy Roman Emperor Rudolf II in Prague, wrote about the subject in mystical and highly allegorical terms that would have been opaque to a craftsperson. Others, such as the Saxon Georg Bauer (known as Agricola), wrote highly practical manuals such as his treatise on mining and metallurgy, De re metallica (1556). Paracelsus’s works, which became popular in the late sixteenth century (he died in 1541), were a mixture of abstruse “chemical philosophy” and straightforward recipes for making drugs and medicines. And aside from such intellectual writers both inside and outside the universities, during the Renaissance there arose a sometimes lucrative tradition of “how to” manuals known as Kunstbüchlein, which were hotch-potch collections of recipes from all manner of sources, including classical encyclopaedists such as Pliny and ill-reputed medieval books of magic. These often styled themselves as “books of secrets”, which of course made them sound very alluring – but often they were miscellanies more likely to give you a mundane recipe for curing toothache than the secret of how to turn lead into gold.
In other words, “secrets” weren’t necessarily about forbidden knowledge at all. According to historian of science William Eamon of New Mexico State University in Las Cruces, “the term was used to describe both trade secrets, in the sense of being concealed, and also “tricks of the trades,” in other words techniques.” Eamon adds that the word “secrets” also “carried a lot of weight owing to the medieval tradition of esoteric knowledge”, which remained prominent in the alchemical tradition of the Renaissance. This glamour meant that the term could be useful for selling books. But how could you allude to secrets while writing them down for all the world to read? Some writers argued that there was virtually a moral imperative to do so. In his introduction to the hugely popular Kunstbüchlein titled simply Secreti (1555), Alessio Piemontese (a pseudonym, probably for the Italian writer Girolamo Ruscelli) told an elaborate and perhaps concocted story of how, by withholding secrets from a physician, he had once been responsible for the death of the physician’s patient.
This tradition of compilations of “secrets” was an old one. The historian of experimental science Lynn Thorndike has suggested that “the most popular book in the Middle Ages” might have been a volume called the Secretum secretorum or “Secret of secrets” (how much more enticing a title could you get?), which has obscure origins probably in the Islamic literature from around the tenth century. It was often attributed to Aristotle, but it’s pretty certain that he never wrote it – as with so many medieval books, the association with a famous name is just a selling point. The book does, however, reflect the Islamic writers’ enthusiasm for Aristotle, and as well as alchemy it includes sections on medicine, astrology, numerology, magic and much else. It was a kind of pocketbook of all that the scholar might want to know – in the words of one historian, a “middle-brow classic for the layman.”
But even if some of these “secrets” seemed hardly worth keeping, alchemy was different – for it really could seem dangerous. If it was possible to make gold, what would that do to the currency and the economy? It was largely this kind of worry, rather than any perception that alchemy was wrong-headed, that gave it a bad reputation. In 1317 Pope John XXII made alchemy illegal and imposed harsh sentences on anyone found guilty of trying to make gold. There was, however, also concern – some of it justified – that alchemists were swindlers who were duping people with fake gold. The image of the alchemist as a trickster who blinded gullible clients with incomprehensible jargon was crystallized in Ben Jonson’s 1610 play The Alchemist, in which his wily charlatan Subtle is a figure of fun. What’s more, alchemy was often associated with religious non-conformism. Paracelsus was unorthodox enough to upset all parties during the Reformation, but he was often linked to the Protestant cause and was sometimes called the “Luther of medicine.” When the French iatrochemists, who adopted Paracelsian ideas, battled with the medical traditionalists in the royal court at the end of the sixteenth century, the dispute was as much about religion – Catholics versus French Protestants (Huguenots) – as it was about medicine.
In view of all this, the genuine alchemist had to tread carefully until at least the seventeenth century. He was vulnerable to suspicion, ridicule and condemnation. That’s one reason why alchemical texts were often written with “intentional obscurity”, according to Voelkel. If you wrote cryptically, you could always argue your way out of accusations that you’d said something heretical or illegal. But the alchemical writers also felt that their knowledge held real power and so should be made unintelligible to lay people. A third motivation will be familiar to anyone who has ever read postmodernist academics: if you wrote too plainly, people might think that what you were saying is trivial, whereas if it was hard to understand then it seems profound and mysterious. Even if the recipes were straightforward, you wouldn’t get far without knowing the “code names” (Decknamen) for chemical substances: that “stinking spirit” is sulphur, and the “grey wolf” or “sordid whore” is stibnite (antimony sulphide), say.
Probably all of these motives for concealment and obfuscation were important to some degree, says Eamon – but he suspects that the major factor in the recondite character of many alchemical books was “to enhance the status and mystery of the work.” Also, he adds, “one shouldn’t underestimate the sheer inertia of tradition: secrecy was a very ancient tradition and always connected with that idea of initiation. Its hold over alchemy was strong even after there was little need for it.” Even Robert Boyle, whose The Sceptical Chymist has often been misinterpreted as a dismissal of all of alchemy rather than just its mystical and cryptic excesses, “employed elaborate coding devices to conceal his recipes”, Eamon says – especially those involved in gold-making. Despite insisting that adepts should be less obscure and cagey, Boyle wasn’t averse to it himself. “He may simply have been protecting his reputation”, says Eamon – he didn’t want to be associated with an art many regarded as foolish. Isaac Newton, whose notebooks attest to extensive alchemical experimentation, was similarly guarded about that work.
The alchemist’s library
Given the diversity of sources, what would an alchemist have had in his library? The answer would depend somewhat on the kind of alchemy (or chymistry) they did, says Eamon. “The more practically inclined alchemists would probably have owned few books,” he says, “and they would probably have been heavy on books on metallurgy such as Agricola’s De re metallica and works such as the Kunstbüchlein.” Alchemists who were more interested in gold-making and the more esoteric mysteries of the art “would have been drawn to works such as those of [the pseudonymous] Basil Valentine, one of the more celebrated chemists of the period – notably The Triumphal Chariot of Antimony.” The medieval texts attributed to the Arabic writer Jabir ibn Hayyan (Latinized to Geber) would also have been popular among this sort of alchemist, Eamon adds.
Alchemists who wrote about distillation, such as the Frenchman John of Rupescissa and authors who wrote under the name of the Spanish philosopher Ramon Llull, were popular in the sixteenth century, especially for alchemists mainly interested in medicine. “Works by Paracelsus and his followers would also be represented in the chymist’s library”, says Eamon. “For many alchemists, books of secrets would also have been quite useful, of which the most popular was Alessio Piemontese’s Secreti.”
The English writer John Evelyn claimed of Robert Boyle that he learnt “more from men, real experiments, & in his laboratory… than from books”. But in fact Boyle had a very large library that included many alchemical works. “Unfortunately the library was dispersed after Boyle’s death and no library catalogue exists,” says Eamon, “but historians have been able to identify several of his books from his notes.” These included, for example, Agricola’s De re metallica and works by Johann Glauber, Paracelsus and Daniel Sennert. Newton’s library is much better catalogued, and included well-used copies of Paracelsus’s On the Transmutation of Metals and an English translation of Novum lumen chymicum by the Moravian Paracelsian alchemist Michael Sendivogius.
A dialogue in the lab
The CHF exhibition shows that such alchemical books weren’t at all treated like sacred texts. While they were still hand-copied these books could cost a fortune, but that didn’t mean they were kept in pristine form. They are well thumbed and evidently much used, sometimes showing signs of a benchtop life just as the later paintings imply. One book, a collection of recipes from Italian and English sources dated around 1470-75, has pages begrimed with what looks like soot. When the conservator used by the CHF, Rebecca Smyrl at the Conservation Center for Art and Historic Artifacts in Philadelphia, offered to remove the offending substance, Voelkel implored her not to, for he figured that this might be the debris from an actual experiment.
Cooked in the furnace: are these soot stains in a fifteenth-century alchemical text the debris from use in the lab?
What’s more, the readers scribbled all over the pages. Since paper itself was expensive, you might as well use the original text as your notebook, and margins were left deliberately generous to accommodate the annotations. In a copy of Christophorus Parisiensis’ Opera from 1557 there is not a square centimeter wasted, and the notes are recorded in a neat hand almost too tiny to read without magnification. Readers didn’t just mine the book for information: they engaged in a dialogue with the author, making corrections or arguing about interpretations. “There was a real conversation going on”, says Erin McLeary, director of the CHF museum. These markings attest that the books were anything but status symbols to be filed away ostentatiously on the shelf. “Reading was a huge part of alchemical practice”, says Voelkel.
The pages of a sixteenth-century alchemical book with marginal notes from a reader.
The CHF’s newly acquired manuscripts are particularly revealing because they date from the moment when print culture was emerging. The printing press lowered the financial and practical barriers to book ownership. “It made alchemical books widely available and relatively affordable”, says Eamon. “You can already see the decline of the notion of books as luxury items in the early sixteenth century.” Printing enabled the Kunstbüchlein artisan’s manuals to become bestsellers in the early sixteenth century: “they were cheaply printed, widely translated, and produced in large numbers”, says Eamon. Alessio Piemontese’s Secreti went through over 100 editions, and its likely author Ruscelli seems to have been something of a hack (the polite term was poligrafo) churning out whatever his publisher demanded. Print culture drove the trend of writing books in vernacular languages rather than Latin (which many potential buyers couldn’t read), and this opening up of new audiences was exploited as much by religious dissenters – Martin Luther was one of the first to spot the possibilities – as by publishers of scientific tracts, such as the Aldine Press of the Venetian humanist Aldus Pius Manutius.
The transition is fascinating to see in the CHF’s books. The early typefaces were designed to look like handwritten text, and some of the abbreviations used by scribes, such as the ampersand (&), were carried over to print – in this case with the origin as a stylized Latin et still evident. Some early printed books left a space at the start of chapters for the ornate initial capital letters to be added by hand. Quite often, the owners decided to save on the expense, so that the chapters begin with a blank.
As time passed and alchemy turned into chymistry and then chemistry, the image of the alchemist recorded by the painters became more tolerant and less satirical. In the hands of one of the most prolific and influential artists of this genre, the Antwerp-born David Teniers the Younger (1610-1690), the alchemist is less Bruegel’s foolish agent of chaos and more a sober laboratory worker. If his floor is still strewn with vessels of brass, glass and clay, that’s simply because it allows Teniers to show off his skill at painting textures. In The Village Chemist (1760) by Justus Juncker, the physician sits calmly taking notes in his well-lit study-workshop; François-Marius Granet’s The Alchemist (early 19th century) shows a sober, monk-like figure in a spacious, sparsely furnished chamber; and Charles Meer Webb’s The Search for the Alchemical Formula (1858) makes the alchemist a romanticized, Gothic savant.
But what are they all doing? Reading (and writing). The text was always there.
François-Marius Granet, The Alchemist (early 19th century)
Charles Meer Webb, The Search for the Alchemical Formula (1858)
Further reading
W. Eamon, Science and the Secrets of Nature (Princeton University Press, 1996).
L. M. Principe & L. DeWitt, Transmutations: Alchemy in Art (Chemical Heritage Foundation, 2002).
L. M. Principe, The Aspiring Adept (Princeton University Press, 2000).
Friday, February 27, 2015
Mitochondria: who mentioned God?
Oh, they used the G word. The Guardian put “playing God” in the headline of my article today on mitochondrial replacement, and now everyone on the comments thread starts ranting about God. I’m not sure God has had much to say in this debate so far, and it’s a shame to bring him in now. But for the sake of the record, I’ll just add here what I said about this phrase in my book Unnatural. I hope that some of the people talking about naturalness and about concepts of the soul in relation to embryos might be able to take a peek at that book too. So here’s the extract:
“Time and again, the warning sounded by the theocon agenda is that by intervening in procreation we are ‘playing God’. Paul Ramsey made artful play of this notion in his 1970 book Fabricated Man, saying that ‘Men ought not to play God before they learn to be men, and after they have learned to be men they will not play God.’ To the extent that ‘playing God’ is simply a modern synonym for the accusation of hubris, this charge against anthropoeia is clearly very ancient. Like evocations of Frankenstein, the phrase ‘playing God’ is now no more than lazy, clichéd – and secular – shorthand, a way of expressing the vague threat that ‘you’ll be sorry’. It is telling that this notion of the man-making man becoming a god was introduced into the Frankenstein story not by Mary Shelley but by Hollywood. For ‘playing God’ was never itself a serious accusation levelled at the anthropoetic technologists of old – one could tempt God, offend him, trespass on his territory, but it would have been heretical seriously to entertain the idea that a person could be a god. As theologian Ted Peters has pointed out,
“The phrase ‘playing God?’ has very little cognitive value when looked at from the perspective of a theologian. Its primary role is that of a warning, such as the word ‘stop’. In common parlance it has come to mean just that: stop.”
And yet, Peters adds, ‘although the phrase ‘playing God’ is foreign to theologians and is not likely to appear in a theological glossary, some religious spokespersons employ the idea when referring to genetics.’ It has, in fact, an analogous cognitive role to the word ‘unnatural’: it is a moral judgement that draws strength from hidden reservoirs while relying on these to remain out of sight.”
OK, there you go. Now here’s the pre-edited article.
It was always going to be a controversial technique. Sure, conceiving babies this way could alleviate suffering, but as a Tory peer warned in the Lords debate, “without safeguards and serious study of safeguards, the new technique could imperil the dignity of the human race, threaten the welfare of children, and destroy the sanctity of family life.” Because it involved the destruction of embryos, the Catholic Church inevitably opposed it. Some scientists warned of the dangers of producing “abnormal babies”, there were comparisons with the thalidomide catastrophe and suggestions that the progeny would be infertile. Might this not be just the beginning of a slippery slope towards a “Frankenstein future” of designer babies?
I’m not talking about mitochondrial replacement and so-called “three person babies”, but about the early days of IVF in the 1970s and 80s, when governments dithered about how to deal with this new reproductive technology. Today, with more than five million people having been conceived by IVF, the term “test-tube baby” seems archaic if not a little perverse (not least because test tubes were never involved). What that debate about assisted conception led to was not the breakup of the family and the birth of babies with deformities, but the formation of the HFEA (Human Fertilisation and Embryology Authority) under the Human Fertilisation and Embryology Act of 1990, providing a clear regulatory framework in the UK for research involving human embryos.
It would be unscientific to argue that, because things turned out fine on that occasion, they will inevitably do so for mitochondrial replacement. No one can be wholly certain what the biological consequences of this technique will be, which is why the HFEA will grant licences to use it only on the carefully worded condition that they are deemed “not unsafe”. But the parallels in the tone of the debate then and now are a reminder of the deep-rooted fears that technological intervention in procreation seems to awaken.
Scientists supportive of such innovations often complain that the opponents are motivated by ignorance and prejudice. They are right to conclude that public engagement is important – in a poll on artificial insemination in 1969, the proportion of people who approved almost doubled when they were informed about the prospects for treating infertility rather than just being given a technical account. But they shouldn’t suppose that science will banish these misgivings. They resurface every time there is a significant advance in reproductive technology: with pre-implantation genetic diagnosis, with the ICSI variant of IVF and so on. They will undoubtedly do so again.
In all these cases, much of the opposition came from people with a strong religious faith. As one of the versions of mitochondrial replacement involves the destruction of embryos, it was bound to fall foul of Catholic doctrine. But rather little was made of that elsewhere, perhaps an acknowledgement that in terms of UK regulation that battle was lost some time ago. (In Italy and the US, say, it is a very different story.) The Archbishops’ Council of the Church of England, for example, stressed that it was worried about the safety and ethical aspects of the technique: the Bishop of Swindon and the C of E’s national adviser for medical ethics warned of “unknown interactions between the DNA in the mitochondria and the DNA in the nucleus [that] might potentially cause abnormality or be found to influence significant personal qualities or characteristics.” Safety is of course paramount in the decision, but the scientific assessments have naturally given it a great deal of attention already.
Lord Deben, who led opposition to the bill in the Lords, addressed this matter head on by denying that his Catholicism had anything to do with it. “I hope no one will say that I am putting this case for any reason other than the one that I put forward,” he said. We can take it on trust that this is what he believes, while finding it surprising that the clear and compelling responses to some of his concerns offered by scientific peers such as Matt Ridley and Robert Winston left him unmoved.
Can it really be coincidental, though, that many of the peers speaking against the bill are known to have strong religious convictions? Certainly, there are secular voices opposing the technology too, in particular campaigners against genetic manipulations in general such as Marcy Darnovsky of the Center for Genetics and Society, who responded to the ongoing deliberations of the US Food and Drug Administration over mitochondrial transfer not only by flagging up alleged safety issues but also by insisting that we consider babies conceived this way to be “genetically modified”, and warning of “mission creep” and “high-tech eugenics”. “How far will we go in our efforts to engineer humans?” she asked in the New York Times.
Parallels between these objections from religious and secular quarters suggest that they reflect a deeper and largely unarticulated sense of unease. We are unlikely to progress beyond the polarization of technological boosters on one side and conservative Luddites and theologians on the other unless we can get to the core of the matter – which is evidently not scriptural, the Bible being somewhat silent about biotechnological ethics.
Bioethicist Leon Kass, who led the George W. Bush administration’s Council on Bioethics when in 2001 it blocked public funding of most stem-cell research, has argued that instinctive disquiet about some advances in assisted conception and human biotechnology is “the emotional expression of deep wisdom, beyond reason’s power fully to articulate it”: an idea he calls the wisdom of repugnance. “Shallow are the souls”, he says, “that have forgotten how to shudder.” I strongly suspect that, beneath many of the arguments about the safety and legality of mitochondrial replacement lies an instinctive repugnance that is beyond reason’s power to articulate.
The problem, of course, is that what one person recoils from, another sees as a valuable opportunity for human well-being. Yet what are these feelings really about?
Like many of our subconscious fears, they are revealed in the stories we tell. Disquiet at the artificial intervention in procreation goes back a long way: to the tales of Prometheus, of the medieval homunculus and golem, and then to Goethe’s Faust and Shelley’s Victor Frankenstein, E.T.A. Hoffmann’s automaton Olympia, the Hatcheries of Brave New World, modern stories of clones and Ex Machina’s Ava. On the surface these stories seem to interrogate humankind’s hubris in trying to do God’s work; so often they turn out on closer inspection to explore more intimate questions of, say, parenthood and identity. They do the universal job of myth, creating an “other” not as a cautionary warning but in order more safely to examine ourselves. So, for example, when we hear that a man raising a daughter cloned from his wife’s cells (not, I admit, an unproblematic scenario) will be irresistibly attracted to her, we are really hearing about our own horror of incestuous fantasies. Only in Hollywood does Frankenstein’s monster turn bad because he is tainted from the outset by his origins; for Shelley, it is a failure of parenting.
I don’t think it is reading too much into the “three-parent baby” label to see it as a reflection of the same anxieties. Many children already have three effective parents, or more – through step-parents, same-sex relationships, adoption and so forth. When applied to mitochondrial transfer, this term shows how strongly personhood has become equated now with genetics, and indicates to geneticists that they have some work to do to move the public on from the strictly deterministic view of genetics that the early rhetoric of the field unwittingly fostered.
We can feel justifiably proud that the UK has been the first country to grapple with the issues raised by this new technology. It has shown already that embracing reproductive technologies can be the exact opposite of a slippery slope: what IVF led to was not a Brave New World of designer babies, but a clear regulatory framework that is capable of being permissive and casuistic, not bound by outmoded principles. The UK is not alone in declining to prohibit the technique, but it is right to have made that decision actively.
It is also right that that decision canvassed a wide range of opinions. Some scientists have questioned why religious leaders should be granted any special status in pronouncing on ethics. But the most thoughtful of them often turn out to have a subtle and humane moral sensibility of the kind that faith should require. There is a well-developed strand of philosophical thought on the moral authority of nature, and theology is a part of it. But on questions like this, we have a responsibility to examine our own responses as honestly as we can.
Monday, February 23, 2015
Why dogs aren't enough in Many Worlds
I'm very glad some folks are finding this exchange on Many Worlds instructive. That was really all I wanted: to get a proper discussion of these issues going. The tone that Sean Carroll found “snide and aggressive” was intended as polemical: it’s just a rhetorical style, you know? What I certainly wanted to avoid (forgive me if I didn’t) was any name-calling or implications of stupidity, fraud, chicanery etc. (It doesn’t surprise me that some of the responses failed to do the same.) My experience has been that it is necessary to light a fire under the MWI in order to get a response at all. Indeed, even then it is proving very difficult to keep the feedback to the point and not get led astray by red herrings. For example, Sean made a big point of saying:
“The people who object to MWI because of all those unobservable worlds aren’t really objecting to MWI at all; they just don’t like and/or understand quantum mechanics.”
I’m genuinely unsure if this is supposed to be referring to me. Since I said in my article
“Certainly, to say that the world(s) surely can’t be that weird is no objection at all”
then I kind of assume it isn’t – so I’m not sure why he brings the point up. I even went to the trouble of trying explicitly to ward off attempts to dismiss my arguments that way:
“Many Worlders harp on about this complaint precisely because it is so easily dismissed.”
But what Sean said next seems to get (albeit obliquely) to the heart of the matter:
“Hilbert space is big, regardless of one’s personal feelings on the matter.”
Whatever these arguments are about, they are surely not about what Hilbert space looks like, since Hilbert space is a mathematical construct – that is simply true by definition, and there is no argument about it. The argument is about what ontological status we ascribe to the state vectors that appear in Hilbert space. I do see the MW reasoning here: the reality we currently experience corresponds to a state vector in Hilbert space, so what grounds do we have for denying reality to the other states into which it can evolve by smooth unitary transformation? The problem, of course, is that a single state in quantum mechanics can evolve into a superposition of many distinct states. Yet if we are going to exclude any of those from having objective reality, we surely must have some criterion for doing so. Absent that, we have the MWI. I do understand that reasoning.
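For concreteness, the kind of evolution at issue can be written schematically – this is just the textbook von Neumann measurement model, not anything specific to the MWI. A device in a “ready” state coupled to a spin superposition evolves unitarily as

(α|↑⟩ + β|↓⟩)|ready⟩ → α|↑⟩|device reads up⟩ + β|↓⟩|device reads down⟩

Nothing in the unitary dynamics singles out one term of the final superposition over the other; the dispute is over what, if anything, licenses us to deny reality to either.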
So it seems that the arguments could be put like this: is it an additional axiom to say “All states in Hilbert space accessible from an initial one that describes our real world are also describing real worlds” – or is it not? To objectors, it is, and a very expensive one at that. To MWers, it is merely what we do for all theories. “Give us one good reason why it shouldn’t apply here”, they say.
It’s a fair point. One objection, which has nothing whatsoever to do with the vastness of Hilbert space, is to say, well, no one has seriously posited such a vast number of multiple and in some sense “parallel” (initially) worlds before – and don’t we in science say that extraordinary claims require extraordinary evidence?* Might we not ask you to work a bit harder in this particular case to establish the relationship between what the formalism says and what exists in physical reality? After all, whether or not we grant all accessible states in Hilbert space a physical reality, we seem to get identical observational consequences. So right now, the only way we can choose between them is philosophically. And we don’t usually regard philosophy as the final arbiter in science.
*For example, Sean emphasizes that the many worlds are a prediction, not a postulate of the theory. But most other theories (all others?) can also tell us some specific things that they predict we will not see happen. I’m not clear if the MWI can rule out any particular thing actually coming to pass that is consistent with the laws of physics. The Copenhagen interpretation, just to take an example, can exclude the “prediction” that human life came to an end following a nuclear conflict sparked by the Bay of Pigs incident. Correct me if I am wrong, but the MWI cannot rule out this “prediction”. It cannot rule out the “prediction” that Many Worlders were never bothered by this irritating science writer. Even if MWI does not exactly say “everything happens”, can it tell us there is anything in particular (consistent with the laws of physics) that does not?
So up to this point, I can appreciate both points of view. What makes me uncomfortable is that the MWers seem so determined to pretend that what they are telling us is actually not so remarkable after all. What’s so surprising, they ask, about the idea that you can instantly duplicate a consciousness, again and again and again? What is frustrating is the blithe insistence that we should believe this, I suspect the most extraordinary claim that science has ever made, on the basis simply of Occam’s (fallible) razor. This is not, do please note, at all the same as worrying about “too many worlds”.
Still, who cares about my discomfort, right? But I wanted to suggest that it’s not just a matter of whether we are prepared to accept this extraordinary possibility. We need to acknowledge that it is rather more complicated than coming to terms with a cute gaggle of sci-fi Doppelgängers. This is not about whether or not people are “all that different from atoms”. It is about whether what people say can be ascribed a coherent meaning. Those responses that have acknowledged this point at all have tended to say “Oh who cares about selfhood and agency? How absurd to expect the theory to deal with unplumbed mysteries like that!” To which I would say that interpretations of quantum theory that don’t have multiple physical worlds don’t even have to think about dealing with them. So perhaps even that Occam’s razor argument is more complicated than you think.
It’s been instructive to see that the MWI is something of a hydra: there are several versions, or at least several views on it. Some say that the “worlds” bit is itself a red herring, a bit of gratuitous sci-fi that we could do without. Others insist that the worlds must be actual: Sean says that people must be copied, and that only makes any kind of sense if the world is copied around them. Some say that invoking problems with personhood is irrelevant since Many Worlds would be true anyway even without people in it. (The inconvenience with this argument is that there are people in it.) Sean, interestingly, says that copying people is not only real but essential, “for deriving the Born rule” in MWI. This is a pointer to his fascinating paper on “self-locating uncertainty”. Here he and Charles Sebens point out that, in the MWI where branch states are rendered distinct and non-interacting by decoherence, the finite time required for an observer to register which branch she is on means that there is a tiny but inescapable interval during which she exists as two identical copies but doesn’t know which one she is. In this case, Carroll and Sebens argue, the rational way to “apportion credence to the different possibilities” is to use the Born rule, which allows us to calculate from the wavefunction the likelihood of finding a particular result when we make a measurement. This, they say, is why probability seems to come into the situation at all, given that the MWI says that everything that can happen does happen with 100% probability.
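For reference, the Born rule itself is compact: if the wavefunction is expanded over the possible outcomes as Ψ = Σᵢ cᵢψᵢ, the probability of observing outcome i is

P(i) = |cᵢ|²

What Carroll and Sebens argue, as I read them, is that an observer caught in that post-branching, pre-identification limbo should rationally spread her credences across the branches in exactly these proportions.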
This sounds completely bizarre: a rule of quantum physics works because of us? But I think I can see how it makes sense. The universe doesn’t care about the Born rule: it’s not forever calculating “probabilities”. Rather, the Born rule is only needed in our mathematical theory of quantum phenomena – and this argument offers an explanation of why it works when it is put there. Now, there is a bit of heavy pulling still to do in order to get from a “rational way to make predictions while we are caught in that brief instant after the universe has split but before we have been able to determine which branch we are in” to a component of the theory that we use routinely even while we are not agreed that this situation arises in the first place. I’m still not clear how that bit works. Neither is it fully clear to me how we are ever really in that limbo between the universe splitting and us knowing which branch we took, given that, in one view of the Many Worlds at least, the universe has split countless times again during that interval. Maybe the answer would be that all those subsequent splits produce versions that are identical with respect to the initial “experiment”, unless they involve processes that interact with the “experiment” and so are part of it anyway. I don’t know.
I do think I can see the answer to my question to Sean (not meant flippantly) of whether it has to be humans who split in order to get the Born rule, and not merely dogs. The answer, I think, is that dogs won’t do because dogs don’t do quantum mechanics. What seems weird is that we’re then left with an aspect of quantum theory that, in this argument, is the way it is not because of some fundamental underlying physical reason so much as because we asked the question in the first place. It feels a bit like Einstein’s moon: was the Born rule true before we invented quantum theory? Or to put it another way, how is consciousness having this agency without appearing explicitly anywhere in the theory? I’m not advancing these as critiques, just saying it seems odd. I’m happy to believe that, within the MWI, the logic of this derivation of the Born rule is sound.
But doesn’t that mean that deriving the Born rule, a longstanding problem in QM, is evidence for the MWI? Sadly not. There are purported derivations within the other interpretations too. None is universally accepted.
The wider point is that, if this is Sean’s reason for insisting we include dividing people in MWI, then the questions about identity raised in my article stand. You know, perhaps they really are trivial? But no one seems to want to say why. This refusal to confront the apparent logical absurdities and contradictions of a theory which predicts that “everything” really happens is curious. It feels as though the MWers find something improper about it – as though this is not quite the respectable business for a physicist who should be contemplating rates of decoherence and the emergence of pointer states and so on. But if you insist on a theory like this, you’re stuck with all its implications – unless, that is, you have some means of “disappearing worlds” that scramble the ability to make meaningful statements about anything.
Saturday, February 21, 2015
Many Worlds: can we make a deal?
OK, picking up from my last post, I think I see a way whereby we can leave this. Advocates of the Many Worlds Interpretation will agree that it does not pretend to say anything about humans and stuff, and that expecting it to do so is as absurd as expecting someone to write down and solve the Schrödinger equation for a football game. They will agree that all those popular (and sometimes technical) books and articles telling us about our alternative quantum selves and Many-Worlds morality and so forth are just the wilder speculative fringes of the theory that struggle with problems of logical coherence. They agree that statements like DeWitt’s that “every quantum transition taking place on every star, in every galaxy, in every remote corner of the universe is splitting our local world on earth into myriads of copies” aren’t actually what the theory says at all. They acknowledge a bit more clearly that the Alices and Bobs in their papers are just representations of devices that can make an observation (yes, I know this is all they have ever been intended as anyway). They agree that when they say “The world is described by a quantum state”, they are using “world” in quite a special sense that makes no particular claims about our place(s) or even our existence(s) in it*. They admit that if one tries to broaden this sense of “world”, some difficult conundrums arise. They admit that the mathematical and ontological statuses of these “worlds” are not the same thing, and that the difference is not resolved by saying that the “worlds” are “really” there in Hilbert space, waiting to be realized.
Then – then – I’m happy to say, sure, the Many Worlds Interpretation, which yes indeed we might better relabel the Everettian Interpretation (shall we begin now?), is a coherent way to think about quantum theory. Possibly even a default way, though I shall want to seek advice on that.
Is that a deal?
*I submit that most physicists and chemists, if they write down the Schrödinger equation for, say, a molecular orbital, are not thinking that they are actually writing down the equation for a “world” but with some bits omitted. One might respond “Well, they should, unless they are content to be ‘shut up and calculate’ scientists”. But I would submit that they are just being good scientists in recognizing the boundaries of the system their equations describe and are not trying to make claims about things they don’t know about or understand.
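To make the footnote concrete: what such a chemist actually writes down is the time-independent eigenvalue equation

Ĥψ = Eψ

in which the Hamiltonian Ĥ contains nothing but kinetic-energy and Coulomb terms for the electrons and nuclei of the molecule in question. No symbol in it refers to anything beyond that bounded system – let alone to a “world”.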
Friday, February 20, 2015
The latest on the huge number of unobservable worlds
OK, I get the point. Sean Carroll really doesn’t care about problems of the ontology of personhood in the Many Worlds Interpretation. I figured that, for a physicist, these would not be at the forefront of his mind, which is fair enough. But philosophically they are valid questions – which is why David Lewis thought a fair bit about them in his theory of Modal Realism. It seems to me that a supposedly scientific theory that walks up and says “Sorry, but you are not you – I can’t say what it is you are, but it’s not what you think you are” is obliged to take questions afterwards. I wrote my article in Aeon to try to get those questions, so determinedly overlooked in many expositions of Many Worlds (though clearly acknowledged, if not really addressed, by one of its thoughtful proponents, Lev Vaidman), on the table.
But no. We’re not having that, apparently. Sean Carroll’s response doesn’t even mention them. Perhaps he feels as Chad Orzel does: “Who cares? All that stuff is just a collection of foggily defined emergent phenomena arising from vast numbers of simple quantum systems. Absent a concrete definition, and most importantly a solid idea of how you would measure any of these things, any argument about theories of mind and selfhood and all that stuff is inescapably incoherent.” I’m sort of hoping that isn’t the case. I’m hoping that when Carroll writes of an experiment on a spin superposition being measured by Alice, “There’s a version of Alice who saw up and a version who saw down”, he doesn’t really think we can treat Alice – I mean real-world Alices, not the placeholder for a measuring device – like a CCD camera. It’s the business of physics to simplify, but we know what Einstein said about that.
All he picks up on is the objection that I explicitly call minor in comparison: the matter of testing the MWI. His response baffles me:
"The MWI does not postulate a huge number of unobservable worlds, misleading name notwithstanding. (One reason many of us like to call it “Everettian Quantum Mechanics” instead of “Many-Worlds.”) Now, MWI certainly does predict the existence of a huge number of unobservable worlds. But it doesn’t postulate them. It derives them, from what it does postulate."
(I don’t quite get the discomfort with the “Many Worlds” label. It seems to me that is a reasonable name for a theory that “predicts the existence of a huge number of unobservable worlds.” Still, call it what you will.)
I’m missing something here. By and large, scientific theories make predictions, and then we do experiments to see if those predictions are right. MWI predicts “a huge number of worlds”, but apparently it is unreasonable to ask if we might examine that prediction in the laboratory.
But, Carroll says, “You don’t hold it against a theory if it makes some predictions that can’t be tested. Every theory does that. You don’t object to general relativity because you can’t be absolutely sure that Einstein’s equation was holding true at some particular event a billion light years away.” The latter is a non-sequitur: accepting a prediction that can’t be tested is not the same as accepting the possibility of exceptions. And you might reasonably say that there is a difference between accepting a theory even if you can’t get experimentally at what it implies in some obscure corner of parameter space and accepting a theory that “predicts a huge number of unobservable worlds”, some populated by other versions of you doing unobservable things. But OK, might we then have just one prediction that we can test please?
I was dissatisfied with Carroll’s earlier suggestion that you can test MWI just by finding a system that violates the Schrödinger equation or the principle of superposition, because, as I pointed out, it is not a unique interpretation of quantum theory in that regard. His response? “So what?” Alternatives to MWI, he says, have to add to its postulates (or change them), and so they too should predict something we can test. And some do. I understand that Carroll thinks the MWI is uniquely exempt from having to defend its interpretation in particular in the experimental arena, because its axioms are the minimal ones. The point I wanted to raise in my article, though, was that the wider implications of the MWI make it less minimal than its advocates claim. If a “minimal” physical theory predicted something that seemed nonsensical about how cells work, but a more complex theory with an experimentally unsupported postulate took away that problem, would we be right to assert that the minimal theory must be right until there was some evidence for that other postulate? Of course, there may be a good argument for why trashing any coherent notion of self and identity and agency is not a problem. I’d love to hear it. I’d rather it wasn’t just ignored.
“Those worlds happen automatically” – sure, I see that. They are a prediction – sure, I see that. But this point-blank refusal to think any more about them? I don’t get that. Perhaps if Many Worlders were to stop, just stop, trying to tell us anything about how those many unobservable worlds are peopled, to stop invoking copies of Alice as placeholders for quantum measurements, to stop talking about quantum brothers, to say simply that they don’t really have a clue what their interpretation can mean for our notions of identity, then I would rest easier. And so would many, many other physicists. That, I think, would make them a lot happier than being told they don’t understand quantum theory or that they are being silly.
I’m concerned that this sounds like a shot at Sean Carroll. I really don’t want that. Not only is he a lot smarter than me, but he writes so damned well on such intensely interesting stuff. I’m not saying that merely to flatter him. I just wanted to get these things discussed.
Many Worlds - a longer view
Here is the pre-edited version of my article for Aeon on the Many Worlds Interpretation of quantum theory. I’m putting it here not because it is any better than the published version (Aeon’s editing was as excellent and improving as ever), but because it gives me a bit more room to go into some of the issues.
In my article I stood up for philosophy. But that doesn’t mean philosophers necessarily get it right either. In the ensuing discussion I have been directed to a talk by philosopher of science David Wallace. Here he criticizes the Copenhagen view that theories are there to make predictions, not to tell us how the world works. He gets a laugh from his audience for suggesting that, if this were so, scientists would have been forced to ask for funding for the LHC not because of what we’d learn from it but so that we could test the predictions made for it.
This is wrong on so many levels. Contrasting “finding out about the world” against “testing predictions of theories” is a totally false opposition. We obviously test predictions of theories to find out if they do a good job of helping us to explain and understand the world. The hope is that the theories, which are obviously idealizations, will get better and better at predicting the fine details of what we see around us, and thereby enable us to tell ever more complete and satisfying stories about why things are this way (and, of course, to allow us to do some useful stuff for “the relief of man’s estate”). So there is a sense in which the justification for the LHC derided by Wallace is in fact completely the right one, although that would have been a very poor way of putting it. Almost no one in science (give or take the [very] odd Nobel laureate who capitalizes Truth like some religious crank) talks about “truth” – they recognize that our theories are simply meant to be good working descriptions of what we see, with predictive value. That makes them “true” not in some eternal Platonic sense but as ways of explaining the world that have more validity than the alternatives. No one considers Newtonian mechanics to be “untrue” because of general relativity. So in this regard, Wallace’s attack on the Copenhagen view is trivial. (I don’t doubt that he could put the case better – it’s just that he didn’t do so here.)
What I really object to is the idea, which Wallace repeats, that Many Worlds is simply “what the theory tells you”. To my mind, a theory tells you something if it predicts the corresponding states – say, the electrical current flowing through a circuit, or the reaction rate of an enzymatic process. Wallace asserts that quantum theory “predicts” a you seeing a live Schrödinger’s cat and a you seeing a dead one. I say, show me the equation where those “yous” appear (along with the universes they are in). The best the MWers can do is to say, well, let’s just denote those things as Ψ(live cat) and Ψ(dead cat), with Ψ representing the corresponding universes. Oh please.
Some objectors to my article have been keen to insist that the MWI really isn’t that bizarre: that the other “yous” don’t do peculiar things but are pretty much just like the you-you. I can see how some, indeed many, of them would be. But there is nothing to exclude those that are not, unless you do so by hand: “Oh, the mind doesn’t work that way, they are still rational beings.” What extraordinary confidence this shows in our ability to understand the rules governing human behaviour and consciousness in more parallel worlds than we can possibly imagine: as if the very laws of physics will make sure we behave properly. Collapsing the wavefunction seems a fairly minor sleight of hand (and moreover one we can actually continue to investigate) compared to that. The truth is that we know nothing about the full range of possibilities that the MWI insists on, nor can we ever do so.
One of the comments underneath my article – and others will doubtless repeat this – makes the remark that Many Worlds is not really about “many universes branching off” at all. Well, I guess you could choose to believe Anonymous Pete instead of Brian Greene and Max Tegmark, if you wish. Or you could follow his link to Sean Carroll’s article, which is one of the examples I cite in my piece of how MWers simply evade the “self” issue altogether.
But you know, my real motivation for writing my article is not to try to bury the MWI (the day I start imagining I am capable of such things, intellectually or otherwise, is the day to put me out to grass), but to provoke its supporters into actually addressing these issues rather than blithely ignoring them while bleating about the (undoubted) problems with the alternatives. Who knows if it will work.
In 2011, participants at a conference on the placid shore of Lake Traunsee in Austria were polled on what the conference was about. You might imagine that this question would have been settled before the meeting was convened – but since the subject was quantum theory, it’s not surprising that there was still much uncertainty. The conference was called “Quantum Physics and the Nature of Reality”, and it grappled with what the theory actually means. The poll, completed by 33 of the participating physicists, mathematicians and philosophers, posed a range of unresolved questions, one of which was “What is your favourite interpretation of quantum mechanics?”
The mere question speaks volumes. Isn’t science supposed to be decided by experiment and observation, free from personal preferences? But experiments in quantum physics have been obstinately silent on what it means. All we can do is develop hunches, intuitions and, yes, favourite ideas.
Which interpretations did these experts favour? There were no fewer than 11 answers to choose from (as well as “other” and “none”). The most popular (42%) was the view put forward by Niels Bohr, Werner Heisenberg and their colleagues in the early days of quantum theory, now known as the Copenhagen Interpretation. In third place (18%) was the Many Worlds Interpretation (MWI).
You might not have heard of most of the alternatives, such as Quantum Bayesianism, Relational Quantum Mechanics, and Objective Collapse (which is not, as you might suppose, saying “what the hell”). Maybe you’ve not heard of the Copenhagen Interpretation either. But the MWI is the one with all the glamour and publicity. Why? Because it tells us that we have multiple selves, living other lives in other universes, quite possibly doing all the things that we dream of but will never achieve (or never dare). Who could resist that idea?
Yet you should. You should resist it not because it is unlikely to be true, or even because, since no one knows how to test it, the idea is not truly scientific at all. Those are valid criticisms, but the main reason you should resist it is that it is not a coherent idea, philosophically or logically. There could be no better contender for Wolfgang Pauli’s famous put-down: it is not even wrong.
Or to put it another way: the MWI is a triumph of canny marketing. That’s not some wicked ploy: no one stands to gain from its success. Rather, its adherents are like giddy lovers, blinded to the flaws beneath the superficial allure.
The measurement problem
To understand how this could happen, we need to see why, more than a hundred years after quantum theory was first conceived, experts are still gathering to debate what it means. Despite such apparently shaky foundations, it is extraordinarily successful. In fact you’d be hard pushed to find a more successful scientific theory. It can predict all kinds of phenomena with amazing precision, from the colours of grass and sky to the transparency of glass, the way enzymes work and how the sun shines.
This is because quantum mechanics, the mathematical formulation of the theory, is largely a technique: a set of procedures for calculating what properties substances have based on the positions and energies of their constituent subatomic particles. The calculations are hard, and for anything more complicated than a hydrogen atom it’s necessary to make simplifications and approximations. But we can do that very reliably. The vast majority of physicists, chemists and engineers who use quantum theory today don’t need to go to conferences on the “nature of reality” – they can do their job perfectly well if, in the famous words of physicist David Mermin, they “shut up and calculate”, and don’t think too hard about what the equations mean.
It’s true that the equations seem to insist on some strange things. They imply that very small entities like atoms and subatomic particles can be in several places at the same time. A single electron can seem to pass through two holes at once, interfering with its own motion as if it was a wave. What’s more, we can’t know everything about a particle at the same time: Heisenberg’s uncertainty principle forbids such perfect knowledge. And two particles can seem to affect one another instantly across immense tracts of space, in apparent (but not actual) violation of Einstein’s theory of special relativity.
But quantum scientists just accept such things. What really divides opinion is that quantum theory seems to do away with the notion, central to science from its beginnings, of an objective reality that we can study “from the outside”, as it were. Quantum mechanics insists that we can’t make a measurement without influencing what we measure. This isn’t a problem of acute sensitivity; it’s more fundamental than that. The most widespread form of quantum maths, devised by Erwin Schrödinger in the 1920s, describes a quantum entity using an abstract concept called a wavefunction. The wavefunction expresses all that can be known about the object. But a wavefunction doesn’t tell you what properties the object has; rather, it enumerates all the possible properties it could have, along with their relative probabilities.
Which of these possibilities is real? Is an electron here or there? Is Schrödinger’s cat alive or dead? We can find out by looking – but quantum mechanics seems to be telling us that the very act of looking forces the universe to make that decision, at random. Before we looked, there were only probabilities.
The Copenhagen Interpretation insists that that’s all there is to it. To ask what state a quantum entity is in before we looked is meaningless. That was what provoked Einstein to complain about God playing dice. He couldn’t abandon the belief that quantum objects, like larger ones we can see and touch, have well defined properties at all times, even if we don’t know what they are. We believe that a cricket ball is red even if we don’t look at it; surely electrons should be no different? This “measurement problem” is at the root of the arguments.
Avoiding the collapse
The way the problem is conventionally expressed is to say that measurement – which really means any interaction of a particle with another system that could be used to deduce its state – “collapses” the wavefunction, extracting a single outcome from the range of probabilities that the wavefunction encodes. But quantum mechanics offers no prescription for how this collapse occurs; it has to be put in by hand. That’s highly unsatisfactory.
There are various ways of looking at this. A Copenhagenist view might be simply to accept that wavefunction collapse is an additional ingredient of the theory, which we don’t understand. Another view is to suppose that wavefunction collapse isn’t just a mathematical sleight-of-hand but an actual, physical process, a little like radioactive decay of an atom, which could in principle be observed if only we had an experimental technique fast and sensitive enough. That’s the Objective Collapse interpretation, and among its advocates is Roger Penrose, who suspects that the collapse process might involve gravity.
Proponents of the Many Worlds Interpretation are oddly reluctant to admit that their preferred view is simply another option. They often like to insist that There Is No Alternative – that the MWI is the only way of taking quantum theory seriously. It’s surprising, then, that in fact Many Worlders don’t even take their own view seriously enough.
That view was presented in the 1957 doctoral thesis of the American physicist Hugh Everett. He asked why, instead of fretting about the cumbersome nature of wavefunction collapse, we don’t just do away with it. What if this collapse is just an illusion, and all the possibilities announced in the wavefunction have a physical reality? Perhaps when we make a measurement we only see one of those realities, yet the others have a separate existence too.
An existence where? This is where the many worlds come in. Everett himself never used that term, but his proposal was championed in the 1970s by the physicist Bryce DeWitt, who argued that the alternative outcomes of the experiment must exist in a parallel reality: another world. You measure the path of an electron, and in this world it seems to go this way, but in another world it went that way.
That requires a parallel, identical apparatus for the electron to traverse. More, it requires a parallel you to measure it. Once begun, this process of fabrication has no end: you have to build an entire parallel universe around that one electron, identical in all respects except where the electron went. You avoid the complication of wavefunction collapse, but at the expense of making another universe. The theory doesn’t exactly predict the other universe in the way that scientific theories usually make predictions. It’s just a deduction from the hypothesis that the other electron path is real too.
This picture really gets extravagant when you appreciate what a measurement is. In one view, any interaction between one quantum entity and another – a photon of light bouncing off an atom – can produce alternative outcomes, and so demands parallel universes. As DeWitt put it, “every quantum transition taking place on every star, in every galaxy, in every remote corner of the universe is splitting our local world on earth into myriads of copies”.
Recall that this profusion is deemed necessary only because we don’t yet understand wavefunction collapse. It’s a way of avoiding the mathematical ungainliness of that lacuna. “If you prefer a simple and purely mathematical theory, then you – like me – are stuck with the many-worlds interpretation,” claims MIT physicist Max Tegmark, one of the most prominent MWI popularizers. That would be easier to swallow if the “mathematical simplicity” were not so cheaply bought. The corollary of Everett’s proposal is that there is in fact just a single wavefunction for the entire universe. The “simple maths” comes from representing this universal wavefunction as a symbol Ψ: allegedly a complete description of everything that is or ever was, including the stuff we don’t yet understand. You might sense some issues being swept under the carpet here.
What about us?
But let’s stick with it. What are these parallel worlds like? This hinges on what exactly the “experiments” that produce or differentiate them are. So you’d think that the Many Worlders would take care to get that straight. But they’re oddly evasive, or maybe just relaxed, about it. Even one of the theory’s most thoughtful supporters, Russian-Israeli physicist Lev Vaidman, seems to dodge the issue in his entry on the MWI in the Stanford Encyclopedia of Philosophy:
“Quantum experiments take place everywhere and very often, not just in physics laboratories: even the irregular blinking of an old fluorescent bulb is a quantum experiment.”
Vaidman stresses that every world has to be formally accessible from the others: it has to be derived from one of the alternatives encoded in the wavefunction of one of the particles. You could say that the universes are in this sense all connected, like stations on the London Underground. So what does this exclude? Nobody knows, and there is no obvious way of finding out.
I put the question directly to Lev: what exactly counts as an experiment? An event qualifies, he replied “if it leads to more than one ‘story’”. He added: “If you toss a coin from your pocket, does it split the world? Say you see tails – is there a parallel world with heads?” Well, that was certainly my question. But I was kind of hoping for an answer.
Most popularizers of the MWI are less reticent. In the “multiverse” of the Many Worlds view, says Tegmark, “all possible states exist at every instant”. One can argue about whether that’s quite the same as DeWitt’s version, but either way the result seems to accord with the popular view that everything that is physically possible is realized in one of the parallel universes.
The real problem, however, is that Many Worlders don’t seem keen to think about what this means. No, that’s too kind. They love to think about what it means – but only insofar as it lets them tell us wonderful, lurid and beguiling stories. The MWI seduces us by multiplying our selves beyond measure, giving us fantasy lives in which there is no obvious limit to what we can do. “The act of making a decision”, says Tegmark – a decision here counting as an experiment – “causes a person to split into multiple copies.”
That must be a pretty big deal, right? Not for theoretical physicist Sean Carroll of the California Institute of Technology, whose article “Why the Many-Worlds formulation of quantum mechanics is probably correct” on his popular blog Preposterous Universe makes no mention of these alter egos. Oh, they are there in the background all right – the “copies” of the human observer of a quantum event are casually mentioned in the midst of the 40-page paper by Carroll that his blog cites. But they are nothing compared with the relief of not having to fret about wavefunction collapse. It’s as though the burning question about the existence of ghosts is whether they observe the normal laws of mechanics, rather than whether they would radically change our view of our own existence.
But if some Many Worlders are remarkably determined to avert their eyes, others delight in this multiplicity of self – though, again, only insofar as it supplies fantasy lives in which there is no obvious limit to what we can do, because in some world we’ve already done it.
Most MWI popularizers think they are blowing our minds with this stuff, whereas in fact they are flattering them. They delve into the implications for personhood just far enough to lull us with the uncanniness of the centuries-old Doppelgänger trope, and then flit off again. The result sounds transgressively exciting while familiar enough to be persuasive.
Identity crisis
In what sense are those other copies actually “us”? Brian Greene, another prominent MW advocate, tells us gleefully that “each copy is you.” In other words, you just need to broaden your mind beyond your parochial idea of what “you” means. Each of these individuals has its own consciousness, and so each believes he or she is “you” – but the real “you” is their sum total. Vaidman puts the issue more carefully: all the copies of himself are “Lev Vaidman”, but there’s only one that he can call “me”.
“‘I’ is defined at a particular time by a complete (classical) description of the state of my body and of my brain,” he explains. “At the present moment there are many different ‘Levs’ in different worlds, but it is meaningless to say that now there is another ‘I’.” Yet it is also scientifically and, I think, logically meaningless to say that there is an “I” at all in his definition, given that we must assume that any “I” is generating copies faster than the speed of thought. A “complete description” of the state of his body and brain never exists.
What’s more, this half-baked stitching together of quantum wavefunctions and the notion of mind leads to a reductio ad absurdum. It makes Lev Vaidman a terrible liar. He is actually a very decent fellow and I don’t want to impugn him, but by his own admission it seems virtually inevitable that “Lev Vaidman” has in other worlds denounced the MWI as a ridiculous fantasy, and has won a Nobel prize for showing, in the face of prevailing opinion, that it is false. (If these scenarios strike you as silly or frivolous, you’re getting the point.) “Lev Vaidman” is probably also a felon, for there is no prescription in the MWI for ruling out a world in which he has killed every physicist who believes in the MWI, or alternatively, every physicist who doesn’t. “OK, those Levs exist – but you should believe me, not them!” he might reply – except that this very belief denies the riposte any meaning.
The difficulties don’t end there. It is extraordinary how attached the MWI advocates are to themselves, as if all the Many Worlds simply have “copies” leading other lives. Vaidman’s neat categorization of “I” and “Lev” works because it sticks to the tidy conceit that the grown-up “I” is being split into ever more “copies” that do different things thereafter. (Not all MWI descriptions will call this copying of selves “splitting” – they say that the copies existed all along – but that doesn’t alter the point.)
That isn't, however, what the MWI is really about – it's just a sci-fi scenario derived from it. As Tegmark explains, the MWI is really about all possible states existing at every instant. Some of these, it’s true, must contain essentially indistinguishable Maxes doing and seeing different things. Tegmark waxes lyrical about these: “I feel a strong kinship with parallel Maxes, even though I never get to meet them. They share my values, my feelings, my memories – they’re closer to me than brothers.”
He doesn't trouble his mind about the many, many more almost-Maxes, near-copies with perhaps a gene or two mutated – not to mention the not-much-like Maxes, and so on into a continuum of utterly different beings. Why not? Because you can't make neat ontological statements about them, or embrace them as brothers. They spoil the story, the rotters. They turn it into a story that doesn't make sense, that can't even be told. So they become the mad relatives in the attic. The conceit of “multiple selves” isn’t at all what the MWI, taken at face value, is proposing. On the contrary, it is dismantling the whole notion of selfhood – it is denying any real meaning of “you” at all.
Is that really so different from what we keep hearing from neuroscientists and psychologists – that our comforting notions of selfhood are all just an illusion concocted by the brain to allow us to function? I think it is. There is a gulf between a useful but fragile cognitive construct based on measurable sensory phenomena, and a claim to dissolve all personhood and autonomy because it makes the maths neater. In the Borgesian library of Many Worlds, it seems there can be no fact of the matter about what is or isn’t you, and what you did or didn’t do.
State of mind
Compared with these problems, the difficulty of testing the MWI experimentally (which would seem a requirement of it being truly scientific) is a small matter. ‘It’s trivial to falsify [MWI]’, boasts Carroll: ‘just do an experiment that violates the Schrödinger equation or the principle of superposition, which are the only things the theory assumes.’ But most other interpretations of quantum theory assume them (at least) too – so an experiment like that would rule them all out, and say nothing about the special status of the MWI. No, we’d quite like to see some evidence for those other universes that this particular interpretation uniquely predicts. That’s just what the hypothesis forbids, you say? What a nuisance.
Might this all simply be a habit of a certain sort of mind? The MWI has a striking parallel in analytic philosophy that goes by the name of modal realism. Ever since Gottfried Leibniz argued that the problem of good and evil can be resolved by postulating that ours is the best of all possible worlds, the notion of “possible worlds” has supplied philosophers with a scheme for debating the issue of the necessity or contingency of truths. The American philosopher David Lewis pushed this line of thought to its limits by asserting, in the position called modal realism, that all worlds that are possible have a genuine physical existence, albeit isolated causally and spatiotemporally from ours. On what grounds? Largely on the basis that there is no logical reason to deny their existence, but also because accepting this leads to an economy of axioms: you don’t have to explain away their non-existence. Many philosophers regard this as legerdemain, but the similarities with the MWI of quantum theory are clear: the proposition stems not from any empirical motive but simply because it allegedly simplifies matters (after all, it takes only four words to say “everything possible is real”, right?). Tegmark’s so-called Ultimate Ensemble theory – a many-worlds picture not explicitly predicated on quantum principles but still including them – has been interpreted as a mathematical expression of modal realism, since it proposes that all mathematical entities that can be calculated in principle (that is, which are possible in the sense of being “computable”) must be real. Lewis’s modal realism does, however, at least have the virtue that he thought in some detail about the issues of personal identity it raises.
If I call these ideas fantasies, it is not to deride or dismiss them but to keep in view the fact that beneath their apparel of scientific equations or symbolic logic they are acts of imagination, of “just supposing”. Who can object to imagination? Not me. But when taken to the extreme, parallel universes become a kind of nihilism: if you believe everything then you believe nothing. The MWI allows – perhaps insists – not just on our having cosily familial ‘quantum brothers’ but on worlds where gods, magic and miracles exist and where science is inevitably (if rarely) violated by chance breakdowns of the usual statistical regularities of physics.
Certainly, to say that the world(s) surely can’t be that weird is no objection at all; Many Worlders harp on about this complaint precisely because it is so easily dismissed. MWI doesn’t, though, imply that things really are weirder than we thought; it denies us any way of saying anything, because it entails saying (and doing) everything else too, while at the same time removing the “we” who says it. This does not demand broad-mindedness, but rather a blind acceptance of ontological incoherence.
That its supporters refuse to engage in any depth with the questions the MWI poses about the ontology and autonomy of self is lamentable. But this is (speaking as an ex-physicist) very much a physicist’s blind spot: a failure to recognize, or perhaps to care, that problems arising at a level beyond that of the fundamental, abstract theory can be anything more than a minor inconvenience.
If the MWI were supported by some sound science, we would have to deal with it – and to do so with more seriousness than the merry invention of Doppelgängers to measure both quantum states of a photon. But it is not. It is grounded in a half-baked philosophical argument about a preference to simplify the axioms. Until Many Worlders can take seriously the philosophical implications of their vision, it’s not clear why their colleagues, or the rest of us, should demur from the judgement of the philosopher of science Robert Crease that the MWI is ‘one of the most implausible and unrealistic ideas in the history of science’ [see The Quantum Moment, 2014]. To pretend that the only conceptual challenge for a theory that allows everything conceivable to happen (or at best fails to provide any prescription for precluding the possibilities) is to accommodate Sliding Doors scenarios shows a puzzling lacuna in the formidable minds of its advocates. Perhaps they should stop trying to tell us that philosophy is dead.
City College Fellowships Program
Gabriella Clemente
Mathematics Major
Mellon Mays Fellow
Gabriella Clemente has a calling to understand how things work, to capture the beauty of these processes, and to express it. As a pure mathematics major, she is learning ways to give her calling—and her intuitions—a definite form.
Born in L.A. to an Argentinean mother and a Puerto Rican father, she grew up in Ramos Mejía, Buenos Aires, and attended the ballet academy of the Teatro Colón. In her adolescence, she moved to Moscow to study at МГАХ (aka the Bolshoi Academy), from which she graduated.
After becoming a Fellow, Gabriella studied the time-independent Schrödinger equation in momentum space – in particular, the momentum-space hydrogen-atom Schrödinger equation and John Lombardi’s proposed solution. During summer 2014, she completed a mathematics REU at CSU, Fresno, in the well-covered dimension theory of graphs. Currently, she is studying geometric flows, specifically the Ricci Flow.
I was wondering about the following:
If you have the time-dependent Schrödinger equation such that $$i \hbar \frac{\partial\psi(x,t)}{\partial t} = - \frac{\hbar^2}{2m} \frac{\partial^2\psi(x,t)}{\partial x^2} + V(x,t) \psi(x,t),$$
where the potential is also time dependent. What is the general strategy to solve this one? Separation of variables, or are there better techniques available? Especially if $V(x,t) = V_1(t)V_2(x)$. For example, if you know the solutions of the time-independent problem $$E_n \psi_n(x) = - \frac{\hbar^2}{2m} \frac{\partial^2\psi_n(x)}{\partial x^2} + V_2(x) \psi_n(x),$$ does this help to find the general solution?
2 Answers
Firstly, there are a few issues with a time-dependent potential, $V(x,t)$. Namely, if we apply Noether's theorem, the conservation of energy may not apply. Specifically, if under a translation,
$$t\to t +t'$$
the Lagrangian $\mathcal{L}=T-V(x,t)$ changes by no more than a total derivative, then conservation of energy will apply, but this restricts the possible $V(x,t)$, depending on the system.
We often treat each Schrödinger equation case by case, as a certain system may lend itself to a different approach, e.g. the harmonic oscillator is easily solved by employing the formalism of creation and annihilation operators. If we consider a time-dependent potential, the equation is generally given by,
$$i\hbar\frac{\partial \psi}{\partial t}=-\frac{\hbar^2}{2m}\frac{\partial^2 \psi}{\partial \mathbf{x}^2} + V(\mathbf{x},t)\psi$$
Depending on $V$, the Laplace or Fourier transform may be employed. Another approach, as mentioned by Jonas, is perturbation theory, whereby we approximate the system as a simpler system, and compute higher order approximations to the fully perturbed system.
As an example, consider the case $V(x,t)=\delta(t)$, in which case the Schrödinger equation becomes,

$$i\hbar\frac{\partial \psi}{\partial t}=-\frac{\hbar^2}{2m}\frac{\partial^2 \psi}{\partial x^2} + \delta(t)\psi$$
We can take the Fourier transform with respect to $t$, rather than $x$, to enter angular frequency space:
$$-\hbar\omega \, \Psi(\omega,x)=-\frac{\hbar^2}{2m}\Psi''(\omega,x) + \psi(0,x)$$
which, if the initial conditions are known, is a potentially simple second-order differential equation; one can then apply the inverse Fourier transform to its solution.
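When no transform or perturbative trick is available, direct numerical integration is the usual fallback. Below is a minimal split-step Fourier sketch for a generic $V(x,t)$; the grid, time step, pulsed potential of the form $V_1(t)V_2(x)$ and Gaussian initial state are all illustrative assumptions rather than part of any standard recipe.

```python
import numpy as np

# Minimal split-step Fourier propagation of the 1D time-dependent
# Schrodinger equation with a time-dependent potential V(x, t).
# Units: hbar = m = 1. Grid and potential are illustrative choices.
hbar, m = 1.0, 1.0
N, L = 512, 40.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
dt, steps = 0.002, 2000

def V(x, t):
    # Example separable potential V1(t) * V2(x): a pulsed harmonic well
    return np.exp(-(t - 1.0) ** 2) * 0.5 * x ** 2

# Initial state: a normalized Gaussian wave packet with some momentum
psi = np.exp(-x ** 2) * np.exp(1j * 2.0 * x)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * (L / N))

kinetic_half = np.exp(-1j * hbar * k ** 2 / (2 * m) * dt / 2)
t = 0.0
for _ in range(steps):
    # Strang splitting: half kinetic step, full potential step, half kinetic
    psi = np.fft.ifft(kinetic_half * np.fft.fft(psi))
    psi *= np.exp(-1j * V(x, t + dt / 2) / hbar * dt)
    psi = np.fft.ifft(kinetic_half * np.fft.fft(psi))
    t += dt

print("norm after propagation:", np.sum(np.abs(psi) ** 2) * (L / N))
```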
ah interesting example. could you explain what an appropriate initial condition could be? (a simple example would be great) – Xin Wang Jun 19 '14 at 18:23
sorry, don't know whether I have to ping you... – Xin Wang Jun 19 '14 at 21:55
@user180097: I honestly don't know what an appropriate initial condition would be for $\psi(0,x)$. – JamalS Jun 20 '14 at 5:29
thanks you...I will ask another question about it:-) – Xin Wang Jun 20 '14 at 12:22
I'm aware of no general recipe. If the time-dependent part of $V$ is weak, one can apply time-dependent perturbation theory (TDPT) to calculate corrections to the unperturbed, time-independent solution. This should be contained in any book on quantum mechanics. This way, one can also calculate the transition probabilities and rates. Specifically for periodic perturbations, this leads to Fermi's golden rule which can often be applied without going through the whole machinery of TDPT.
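As a minimal illustration of first-order TDPT, the sketch below compares the perturbative transition probability with exact numerical propagation for a weakly driven two-level system; all parameter values are arbitrary choices.

```python
import numpy as np
from scipy.integrate import solve_ivp

# First-order time-dependent perturbation theory for a two-level system
# driven by V(t) = V0 * sin(w t) coupling the levels (hbar = 1).
E1, E2 = 0.0, 1.0          # unperturbed energies
w21 = E2 - E1              # transition frequency
V0, w = 0.05, 0.9          # weak, slightly off-resonant drive

# Perturbative amplitude: c2(t) = -i * int_0^t V0 sin(w t') e^{i w21 t'} dt'
def c2_perturbative(t):
    tp = np.linspace(0, t, 4000)
    return -1j * np.trapz(V0 * np.sin(w * tp) * np.exp(1j * w21 * tp), tp)

# Exact propagation of the Schrodinger equation in the interaction picture,
# with the complex amplitudes split into real and imaginary parts.
def rhs(t, y):
    c1, c2 = y[0] + 1j * y[1], y[2] + 1j * y[3]
    Vt = V0 * np.sin(w * t)
    dc1 = -1j * Vt * np.exp(-1j * w21 * t) * c2
    dc2 = -1j * Vt * np.exp(+1j * w21 * t) * c1
    return [dc1.real, dc1.imag, dc2.real, dc2.imag]

sol = solve_ivp(rhs, (0, 200), [1, 0, 0, 0], t_eval=[200], rtol=1e-8)
p_exact = sol.y[2, -1] ** 2 + sol.y[3, -1] ** 2
p_pert = abs(c2_perturbative(200.0)) ** 2
print(f"P(1->2) exact: {p_exact:.5f}  first-order TDPT: {p_pert:.5f}")
```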
Does the Hamiltonian always translate to the energy of a system? What about in QM? So by the Schrödinger equation, is it true then that $i\hbar{\partial\over\partial t}|\psi\rangle=H|\psi\rangle$ means that $i\hbar{\partial\over\partial t}$ is also an energy operator? How can we interpret this? Thanks.
For the first part of the question (v1), see also physics.stackexchange.com/q/11905/2451 For the second part of the question (v1), see also physics.stackexchange.com/q/17477/2451 – Qmechanic Oct 13 '11 at 18:33
2 Answers
I will formulate the following in such a way, that the language doesn't change too much within the answer. This also emphasizes the analogies of related concepts.
• Classically, you have a configuration/state $\Psi$, which is characterised by coordinates $x^i,v^i$ or $q^i,p_i$ and/or any other relevant parameters. Then an energy is a function or functional of this configuration
$$H:\Psi\mapsto E_\Psi,\ \ \mbox{where}\ \ E_\Psi:=H[\Psi].$$
Here $E_\Psi$ is some real (energy-)value associated with the configuration $\Psi$.
To name an example: Let $q$ and $p$ be the coordinates of your two-dimensional phase space, then every point $\Psi=(q,p)$ characterises a possible configuration. The configuration/state $\Psi$ here is really just the pair of coordinates. The scalar function $H(q,p)=\frac{1}{2m}p^2+\frac{m\omega^2}{2}q^2$ clearly is a map which assigns a scalar energy value $E_\Psi$ to every possible configuration $\Psi$.
The evolution of $\Psi$ in time is determined by $H$, see Hamilton's Equations. This might be viewed as the point of coming up with the Hamiltonian in the first place and it is typically done in such a way that the energy value $E_\Psi$ will not change with time. See also this thread for a related question. What you call "energy" is pretty much determined by this criterion. In the case of a time independent Hamiltonian (as in the example) and if the time development of observables $f$ is governed by $\frac{\mathrm{d}f}{\mathrm{d}t} = \{f, H\} + \frac{\partial f}{\partial t}$, then you have $\frac{\mathrm{d}H}{\mathrm{d}t} = \{H, H\} = 0$ and the conservation of the quantity $E_\Psi:=H[\Psi]$ is evident. Of course, you might want to model friction processes and whatnot and it then might be difficult to define all the relevant quantities.
• In quantum mechanics, your configuration $\Psi$ is given by a state vector $|\Psi\rangle$ (or an equivalence class of such vectors) in some Hilbert space. There are many vectors in this Hilbert space, but there are some vectors $|\Psi_n\rangle$, which also span the whole vector space and which are also special in the following sense: They are eigenvectors of the Hamiltonian operator: $H|\Psi_n\rangle = E_n|\Psi_n\rangle$. Here $E_n$ is just the real eigenvalue and I assume that I can enumerate the eigenstates by a discrete index $n$. Now for every point in time, your state vector $\Psi$ is just a linear combination of the special states $\{\Psi_n\}$. (As a remark, notice that all the time dependencies of states are left implicit in this post.) Therefore, if you know how $H$ acts on all the $\Psi_n$'s, you know how $H$ acts on any $\Psi$. Since a Hilbert space naturally comes with an inner product, i.e. a map
$$\omega:|\Psi\rangle\times|\Phi\rangle\mapsto\omega(|\Psi\rangle,|\Phi\rangle)\equiv\langle\Psi|\Phi\rangle\in\mathbb{C},\ \ \mbox{satisfying}\ \ \langle\Psi|\Psi\rangle>0\ \ \forall\ \ |\Psi\rangle\ne 0,$$
you can define a new map
$$\omega_H:\Psi\mapsto E_\Psi,\ \ \mbox{where}\ \ E_\Psi:=\omega_H[\Psi],$$
$$\omega_H[\Psi]:=\omega(|\Psi\rangle,H|\Psi\rangle)\equiv\langle\Psi| H|\Psi\rangle.$$
Compare the lines above with the classical case. Here $E_\Psi=\ ...=\langle\Psi| H|\Psi\rangle$ is then called the expectation value of the Hamiltonian in the physical state. It is the energy value associated with $\Psi$, which is real due to hermiticity of the Hamiltonian. Also, like in the classical case, the time evolution of any state $\Psi$ (resp. state vector $|\Psi\rangle$) is determined by the observable $H$, an operator in the QM-case. And as stated above, exactly this $H$, together with the state/configuration $\Psi$, gives you the energy values $E_\Psi$ associated with $\Psi$. This relation of time and energy is by construction: The Schrödinger equation is an axiom (but a natural one, see conservation of probability), which relates time evolution and Hamiltonian. Now, if the time dependency of the state is governed by the Hamiltonian (whatever it might look like in your scenario), then so is the time dependency of $\langle\Psi| H|\Psi\rangle$.
And if $\ i\hbar\frac{\partial}{\partial t}|\Psi\rangle=H|\Psi\rangle\ $ is true for all vectors in your Hilbert space, i.e. if $i\hbar\frac{\partial}{\partial t}=H$ holds as an operator equation, then these two really are just the same operator. If you ask for an interpretation for this, then I'd suggest you hold on to the quantum mechanical relation between frequency and energy. Regarding the equation which determines time evolution, quantum mechanics is much easier than classical mechanics in a sense, especially if you come with some Lie group theory intuition in your backpack.
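A minimal numerical sketch of the map $\Psi\mapsto E_\Psi=\langle\Psi|H|\Psi\rangle$, assuming $\hbar = m = \omega = 1$, a finite-difference grid, and an arbitrary trial state:

```python
import numpy as np

# Discretize H = p^2/2m + V(q) on a grid and evaluate the energy
# functional E_psi = <psi|H|psi> for a trial state (hbar = m = 1).
# Grid, potential and trial state are illustrative assumptions.
N, L = 400, 20.0
dq = L / N
q = np.linspace(-L / 2, L / 2, N)

# Kinetic term via a second-order finite-difference Laplacian
lap = (np.diag(np.full(N - 1, 1.0), -1)
       - 2.0 * np.eye(N)
       + np.diag(np.full(N - 1, 1.0), 1)) / dq ** 2
H = -0.5 * lap + np.diag(0.5 * q ** 2)   # harmonic oscillator, omega = 1

# A normalized trial state vector |psi>
psi = np.exp(-(q - 1.0) ** 2)
psi /= np.linalg.norm(psi)

E_psi = psi @ H @ psi                     # <psi|H|psi>
E0 = np.linalg.eigvalsh(H)[0]             # ground-state eigenvalue
print(f"<psi|H|psi> = {E_psi:.4f}, ground state E0 = {E0:.4f}")
```

Any trial state gives $E_\Psi \ge E_0$, which is the variational principle at work.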
The classical example for something where the Hamiltonian is different from the total energy is a particle in an accelerating constraint, like a bead sliding on a rotating wire. I will use a different system, a particle of mass m in a long uniformly accelerating box.
If the box is accelerating with acceleration a, in the comoving system, there is a fictitious force on the particle which is derived from a fictitious potential. The comoving Hamiltonian description is the same as for a particle in gravity (with g = a), so that
$$ H = {p^2\over 2m} + mg x$$
Which is valid for positive x, and the potential is infinite for negative x. Viewing the same particle in the non-accelerated frame, the total energy is just the kinetic energy, and the moving infinite-potential wall restricts the particle from entering the region $x<{at^2\over 2}$. The comoving Hamiltonian is not the energy of the particle, which increases without bound with time, but it gives the dynamical law for the comoving frame wavefunction.
The wavefunction of the particle will (if it can radiate) settle down to the ground state of the moving Hamiltonian. The particle will be in a bound profile against the wall, where the binding is by a linear potential. For the inertial frame, this profile will be accelerating steadily, and its energy does not settle down. The relation between the two is given by boosting the wavefunction by an amount which depends on time.
For systems which are not constrained, the Hamiltonian is always the total energy. This is also true for systems where the constraints do not add energy to the system. The Hamiltonian for systems which add energy is usually explicitly time dependent, but not so in the case where the dynamics is time independent from the point of view of the particle. Mathematically, in such a system you have a nontrivial time translation invariance which is a symmetry, and in the accelerated particle case, this time translation symmetry mixes up inertial frame time translation and boosts.
share|improve this answer
But time is not regarded as an operator but as a parameter in quantum mechanics. Right? Then, is the replacement $E\rightarrow i\hbar\frac{\partial}{\partial t}$ valid? If yes, I would like to know whether it is hermitian. – Roopam Feb 21 at 17:44
Path Integrals and Hamiltonians: Principles and Methods
Belal E. Baaquie
Cambridge University Press
Reviewed by Michael Berg
My own path to, or through, quantum mechanics has been heavily influenced by the fact that I am a pure mathematician and don’t speak physics. I find physics books written for physicists by physicists very difficult to read. This was brought home to me in spades when I started all this ecumenical work in the cause of what are ultimately analytic number theoretic concerns. (See below for a hint.) Happily, I hit upon the book by Prugovečki, Quantum Mechanics in Hilbert Space, which is in my opinion, unsurpassed. It is fundamentally a functional analysis text tailored to the bizarre (but beautiful) axiomatics of quantum mechanics. Prugovečki does a phenomenal job playing off the Heisenberg picture (matrix mechanics) against the Schrödinger picture (wave mechanics), both fitted ultimately into the framework of self-adjoint operators (densely defined) on a Hilbert space, the Hilbert space of states of a quantum mechanical system. It is here that things get dicey, as various maneuvers with projection operators and spectral measures need to be interpreted vis à vis the physical reality suggested by measurements and observations. Thus, the Born rules and the Copenhagen interpretation rear their heads, and we find ourselves explaining to Einstein why Heisenberg had a point, at least epistemologically. On the other hand, in the framework of quantum mechanics proper, the vaunted (and frightening) uncertainty principle is ultimately a Fourier analytic affair: there is a lot of comfort to be derived from mathematics.
Of course, it needs to be said that the book for quantum mechanics is Dirac’s magisterial Principles of Quantum Mechanics. A strong case can be made that this is the most elegant book ever written in the genre; it reads like high literary art, in Dirac’s famous minimalist style. Erik Satie, not Maurice Ravel, perhaps. But his Lucasian professorship notwithstanding, Dirac’s mathematics is not a pure mathematician’s mathematics, brilliant though his presentation and ideas are. For example, there is the matter of notation, which was invented by Dirac for particular purposes. Consider the exploitation of what the isomorphism between a linear space and its dual can provide in the way of symmetry of outcomes (think of \(x^*(y)\) as simultaneously a function of each of the variables) — this leads to the bra-ket notation. To physicists this is soon second nature, but it is not the usual notation we mathematicians use. So I happily ran home to Prugovečki.
Dirac was known among (many) other things for not only a reformulation of quantum mechanics in terms of Poisson brackets (on the heels of the pioneering work of Heisenberg and Schrödinger) but for the realization and proof (well, a proof: I believe Schrödinger established it too) that the so-called Heisenberg and Schrödinger pictures of quantum mechanics are mathematically equivalent: matrix mechanics and wave mechanics are the same thing, so there is just quantum mechanics, “QM,” presented in different pictures or from different standpoints. For example, one plays the evolution of a state in space off against the evolution of a state in time, and both perspectives are not just dual but complementary — you get two for the price of one. And in this game, we encounter 1-parameter groups of unitary operators, which, of course, is music to the ears of any analytic number theorist (to put in a plug for what I do). It is these 1-parameter groups of unitary operators that provide the time-evolution of a quantum mechanical system, and are in fact at the heart of an entirely novel way of doing quantum mechanics, that of Feynman.
And this takes us to the book under review. In his Preface, Baaquie says that beyond the traditional approach to QM, that of Schrödinger (via the Schrödinger wave equation), there are two others, “namely the operator approach of Heisenberg and the path integral approach of Dirac-Feynman, that provide a mathematical framework that is independent of the Schrödinger equation.” He goes on to say, “[i]n this book, the Schrödinger equation is never directly solved; instead the Hamiltonian [or total energy] operator is analyzed and path integrals for different quantum and classical random systems are studied to gain an understanding of quantum mechanics.”
After the Preface, Baaquie presents a Synopsis of the book, the thrust of which is that it is divided into six parts. We get, in sequence, fundamental principles and the mathematical structure of QM, stochastic processes, discrete degrees of freedom, quadratic path integrals, acceleration action, and finally nonlinear path integrals. The last mentioned material includes, e.g., coverage of a nonlinear quartic Lagrangian. It should be noted that Feynman’s presentation of his path integral formalism is fundamentally based on a Lagrangian formalism.
The book is well written, in a compact style, but not stinting on clear explanations. I don’t know whether it is because after a good deal of exposure to quantum physics through the services of Prugovečki and others I am more comfortable with this material, and that explains it, but I think the book is accessible even to us mathematicians. Well, let me hedge my bets and add a caveat: I don’t think this book is indicated for raw recruits — you should know some quantum mechanics already, perhaps at the level of Faddeev-Yakubovskiĭ, Lectures on Quantum Mechanics. After this it’s the right time for “a pedagogical introduction to the essential principles of path integrals and Hamiltonians,” as the present book’s back cover advertises.
By the way, there are a few limitations to be noted: there is no discussion of the uncertainty principle (which is not surprising, given the framework Baaquie has chosen), and there are no Feynman diagrams, meaning that the analysis of the integrals is not pushed in the usual direction physicists take it. In the latter connection, see Freeman Dyson’s description of how Feynman himself used (or didn’t use) his integrals; and once you’ve looked at that, keep reading Dyson (it’s Dyson, after all: just keep going).
However, Path Integrals and Hamiltonians looks like a very useful book, and I, for one, am very happy to have a copy.
1. Synopsis
Part I. Fundamental Principles:
2. The mathematical structure of quantum mechanics
3. Operators
4. The Feynman path integral
5. Hamiltonian mechanics
6. Path integral quantization
Part II. Stochastic Processes:
7. Stochastic systems
Part III. Discrete Degrees of Freedom:
8. Ising model
9. Ising model: magnetic field
10. Fermions
Part IV. Quadratic Path Integrals:
11. Simple harmonic oscillators
12. Gaussian path integrals
Part V. Action with Acceleration:
13. Acceleration Lagrangian
14. Pseudo-Hermitian Euclidean Hamiltonian
15. Non-Hermitian Hamiltonian: Jordan blocks
16. The quartic potential: instantons
17. Compact degrees of freedom
Advanced Mathematics for Engineers and Scientists/The Laplacian and Laplace's Equation
The Laplacian and Laplace's Equation
By now, you've most likely grown sick of the one dimensional transient diffusion PDE we've been playing with:

$$\frac{\partial u}{\partial t} = \alpha \frac{\partial^2 u}{\partial x^2}$$
Make no mistake: we're not nearly done with this stupid thing; but for the sake of variety let's introduce a fresh new equation and, even though it's not strictly a separation of variables concept, a really cool quantity called the Laplacian. You'll like this chapter; it has many pretty pictures in it.
Graph of $u = x^3$.
The Laplacian
The Laplacian is a linear operator in Euclidean n-space. There are other spaces with properties different from Euclidean space. Note also that operator here has a very specific meaning: just as a function operates on real numbers, our operator operates on functions, not on the real numbers.
We'll start with the 3D Cartesian "version". Let $u = u(x, y, z)$. The Laplacian of the function $u$ is defined and notated as:

$$\nabla^2 u = \Delta u = \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} + \frac{\partial^2 u}{\partial z^2}$$
So the operator is taking the sum of the nonmixed second derivatives of $u$ with respect to the Cartesian space variables $x$, $y$, and $z$. The "del squared" notation $\nabla^2$ is preferred since the capital delta $\Delta$ can be confused with increments and differences, and $\operatorname{div}(\operatorname{grad} u)$ is too long and doesn't involve pretty math symbols. The Laplacian is also known as the Laplace operator or Laplace's operator, not to be confused with the Laplace transform. Also, note that if we had only taken the first partial derivatives of the function $u$, and put them into a vector, that would have been the gradient of the function $u$. The Laplacian takes the second unmixed derivatives and adds them up.
In one dimension, recall that the second derivative measures concavity. Suppose $u = u(x)$; if $u''$ is positive, $u$ is concave up, and if $u''$ is negative, $u$ is concave down, see the graph below with the straight up or down arrows at various points of the curve. The Laplacian may be thought of as a generalization of the concavity concept to multivariate functions.
This idea is demonstrated at the right, in one dimension, for $u = x^3$. To the left of $x = 0$, the Laplacian (simply the second derivative here) is negative, and the graph is concave down. At $x = 0$, the curve inflects and the Laplacian is $0$. To the right of $x = 0$, the Laplacian is positive and the graph is concave up.
Concavity may or may not do it for you. Thankfully, there's another very important view of the Laplacian, with deep implications for any equation it shows itself in: the Laplacian compares the value of $u$ at some point in space to the average of the values of $u$ in the neighborhood of the same point. The three cases are:
• If $u$ is greater at some point than the average of its neighbors, $\nabla^2 u < 0$.
• If $u$ is at some point equal to the average of its neighbors, $\nabla^2 u = 0$.
• If $u$ is smaller at some point than the average of its neighbors, $\nabla^2 u > 0$.
So the Laplacian may be thought of as, at some point $P$:

$$\nabla^2 u \propto \left(\text{average of } u \text{ in the neighborhood of } P\right) - u(P)$$
The neighborhood of a point $P$.
The neighborhood of some point is defined as the open set that lies within some Euclidean distance δ (delta) from the point. Referring to the picture at right (a 3D example), the neighborhood of the point $P = (x_0, y_0, z_0)$ is the shaded region which satisfies:

$$\sqrt{(x - x_0)^2 + (y - y_0)^2 + (z - z_0)^2} < \delta$$
Note that our one dimensional transient diffusion equation, our parallel plate flow, involves the Laplacian:

$$\frac{\partial u}{\partial t} = \alpha \frac{\partial^2 u}{\partial x^2} = \alpha\, \nabla^2 u$$
With this mentality, let's examine the behavior of this very important PDE. On the left is the time derivative and on the right is the Laplacian. This equation is saying that:
The rate of change of $u$ at some point is proportional to the difference between the average value of $u$ around that point and the value of $u$ at that point.
For example, if there's at some position a "hot spot" where $u$ is on average greater than its neighbors, the Laplacian will be negative and thus the time derivative will be negative; this will cause $u$ to decrease at that position, "cooling" it down. This is illustrated below. The arrows reflect the magnitude of the Laplacian and, by grace of the time derivative, the direction the curve will move.
Visualization of transient diffusion.
It's worth noting that in 3D, this equation fully describes the flow of heat in a homogeneous solid that's not generating its own heat (like too much electricity through a narrow wire would).
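The neighbor-comparison picture is easy to check numerically. Here is a minimal sketch (grid size, diffusivity, boundary treatment and the initial hot spot are all arbitrary choices) in which the discrete Laplacian relaxes a spike toward the average of its neighbors:

```python
import numpy as np

# Finite-difference demo: the discrete Laplacian literally compares a
# point to the average of its neighbors, and the heat equation
# u_t = alpha * u_xx relaxes "hot spots" toward that average.
N, alpha = 101, 1.0
dx = 1.0 / (N - 1)
dt = 0.4 * dx ** 2 / alpha          # stable explicit time step
u = np.zeros(N)
u[N // 2] = 1.0                      # a hot spot in the middle

for _ in range(500):
    lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx ** 2
    lap[0] = lap[-1] = 0.0           # hold the boundaries fixed at 0
    u = u + dt * alpha * lap         # forward Euler step

print("peak after diffusion:", u.max())  # far below the initial 1.0
```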
Laplace's Equation
Laplace's equation describes a steady state condition, and this is what it looks like:

$$\nabla^2 u = 0$$
Solutions of this equation are called harmonic functions. Some things to note:
• Time is absent. This equation describes a steady state condition.
• The absence of time implies the absence of an IC, so we'll be dealing with BVPs rather than IBVPs.
• In one dimension, this is the ODE of a straight line passing through the boundaries at their specified values.
• All functions that satisfy this equation in some domain are analytic (informally, an analytic function is equal to its Taylor expansion) in that domain.
• Despite appearances, solutions of Laplace's equation are generally not minimal surfaces.
• Laplace's equation is linear.
Laplace's equation is separable in the Cartesian (and almost any other) coordinate system. So, we shouldn't have too much problem solving it if the BCs involved aren't too convoluted.
Laplace's Equation on a Square: Cartesian Coordinates
Steady state conditions on a square.
Imagine a 1 x 1 square plate that's insulated top and bottom and has constant temperatures applied at its uninsulated edges, visualized to the right. Heat is flowing in and out of this thing steadily through the edges only, and since it's "thin" and "insulated", the temperature may be given as $u = u(x, y)$. This is the first time we venture into two spatial coordinates, note the absence of time.
Let's make up a BVP, referring to the picture:
So we have one nonhomogeneous BC. Assume that $u(x, y) = X(x)Y(y)$:
As before, calling the separation constant $-\lambda^2$ in favor of just $k$ (or something) happens to make the problem easier to solve. Note that the negative sign was kept with the $X$ equation: again, these choices happen to make things simpler. Solving each equation and combining them back into $u$:

$$u(x, y) = \left(c_1\cos(\lambda x) + c_2\sin(\lambda x)\right)\left(c_3\cosh(\lambda y) + c_4\sinh(\lambda y)\right)$$
At edge D:
Note that the constants can be merged, but we won't do it so that a point can be made in a moment. At edge A:
Taking as would satisfy this particular BC, however this would yield a plane solution of , which can't satisfy the temperature at edge C. This is why the constants weren't merged a few steps ago, to make it obvious that may not be . So, we instead take to satisfy the above, and then combine the three constants into one, call it :
Now look at edge B:
It should go without saying by now that can't be zero, since this would yield which couldn't satisfy the nonzero BC. Instead, we can take :
As of now, this solution will satisfy 3 of the 4 BCs. All that is left is edge C, the nonhomogeneous BC.
Neither nor can be contorted to fit this BC.
Since Laplace's equation is linear, a linear combination of solutions to the PDE is also a solution to the PDE. Another thing to note: since the BCs (so far) are homogeneous, we can add the solutions without worrying about nonzero boundaries adding up.
Though as shown above will not solve this problem, we can try summing (based on ) solutions to form a linear combination which might solve the BVP as a whole:
Assuming this form is correct (review Parallel Plate Flow: Realistic IC for motivation), let's again try applying the last BC:
It looks like it needs Fourier series methodology. Finding via orthogonality should solve this problem:
25 term partial sum of the series solution.
was changed to in the last step. Also, for integer , . Note that a Fourier sine expansion has been done. The solution to the BVP can finally be assembled:
That solves it!
It's finally time to mention that the BCs are discontinuous at the two corners of the nonhomogeneous edge. As a result, the series should converge slowly at those points. This is clear from the plot at right: it's a 25 term partial sum (note that half of the terms are $0$), and it looks perfect except near the nonhomogeneous edge, especially near the corner discontinuities.
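For readers who want to reproduce the plot, here is a sketch of the partial sum. The picture defining the BVP isn't reproduced here, so the code assumes the standard configuration: $u = 0$ on three edges of the unit square and $u = 1$ on the nonhomogeneous edge, for which orthogonality gives $B_n = 4/(n\pi\sinh(n\pi))$ for odd $n$ and $0$ for even $n$:

```python
import numpy as np

# Partial sum of the separated solution of Laplace's equation on the
# unit square, assuming u = 0 on three edges and u = 1 on the top edge
# (this configuration is an assumption, since the original figure is
# not available here).
def u(x, y, terms=25):
    total = np.zeros_like(x, dtype=float)
    for n in range(1, 2 * terms, 2):      # even-n coefficients vanish
        a = n * np.pi
        # sinh(a*y)/sinh(a), written to avoid overflow for large n
        ratio = (np.exp(a * (y - 1))
                 * (1 - np.exp(-2 * a * y)) / (1 - np.exp(-2 * a)))
        total += 4.0 / a * np.sin(a * x) * ratio
    return total

x = np.linspace(0, 1, 5)
print(u(x, np.full_like(x, 0.999)))  # near the top edge: close to 1,
                                     # except near the corner discontinuities
```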
Laplace's Equation on a Circle: Polar Coordinates
Now, we'll specify the value of $u$ on a circular boundary. A circle can be represented in Cartesian coordinates without too much trouble; however, it would result in nonlinear BCs which would render the approach useless. Instead, polar coordinates should be used, since in such a system the equation of a circle is very simple. In order for this to be realized, a polar representation of the Laplacian is necessary. Without going into the details just yet, the Laplacian is given in (2D) polar coordinates:

$$\nabla^2 u = \frac{\partial^2 u}{\partial r^2} + \frac{1}{r}\frac{\partial u}{\partial r} + \frac{1}{r^2}\frac{\partial^2 u}{\partial \theta^2}$$
This result may be derived using differentials and the chain rule; it's not difficult but it's a little long. In these coordinates Laplace's equation reads:

$$\frac{\partial^2 u}{\partial r^2} + \frac{1}{r}\frac{\partial u}{\partial r} + \frac{1}{r^2}\frac{\partial^2 u}{\partial \theta^2} = 0$$
Note that in going from Cartesian to polar coordinates, a price was paid: though still linear, Laplace's equation now has variable coefficients. This implies that after separation at least one of the ODEs will have variable coefficients as well.
Let's make up the following BVP, letting :
This could represent a physical problem analogous to the previous one: replace the square plate with a disc. Note the apparent absence of sufficient BCs to obtain a unique solution. The funny looking statement that u is bounded inside the domain of interest turns out to be the key to getting a unique solution, and it often shows itself in polar coordinates. It "makes up" for the "lack" of BCs. To separate, we as usual incorrectly assume that $u(r, \theta) = R(r)\Theta(\theta)$:
Once again, the way the negative sign and the separation constant are arranged makes the solution easier later on. These decisions are made mostly by trial and error.
The $R$ equation is probably one you've never seen before; it's a special case of the Euler differential equation (not to be confused with the Euler-Lagrange differential equation). There are a couple of ways to solve it, the most general method would be to change the variables so that an equation with constant coefficients is obtained. An easier way would be to note the pattern in the order of the coefficients and the order of the derivatives, and from there guess a power solution. Either way, the general solution to this simple case of Euler's ODE is given as:

$$R(r) = c_1 r^{\lambda} + c_2 r^{-\lambda}$$
This is a very good example problem since it goes to show that PDE problems very often turn into obscure ODE problems; we got lucky this time since the solution for $R$ was rather simple though its ODE looked pretty bad at first sight. The solution to the $\Theta$ equation is:

$$\Theta(\theta) = c_3 \cos(\lambda\theta) + c_4 \sin(\lambda\theta)$$
Now, this is where the English sentence condition stating that u must be bounded in the domain of interest may be invoked. As $r \to 0$, the term involving $r^{-\lambda}$ is unbounded. The only way to fix this is to take $c_2 = 0$. Note that if this problem were solved between two concentric circles, this term would be nonzero and very important. With that term gone, constants can be merged:

$$u(r, \theta) = r^{\lambda}\left(A\cos(\lambda\theta) + B\sin(\lambda\theta)\right)$$
Only one condition remains: on , yet there are 3 constants. Let's say for now that:
Then, it's a simple matter of equating coefficients to obtain:
Now, let's make the frequencies differ:
Equating coefficients won't work. However, if the IC were broken up into individual terms, the sum of the solution to the terms just happens to solve the BVP as a whole:
Verify that the solution above is really equal to the BC at :
And, since Laplace's equation is linear, this must solve the PDE as well. What all of this implies is that, if some generic function may be expressed as a sum of sinusoids with angular frequencies given by , all that is needed is a linear combination of the appropriate sum. Notated:
To identify the coefficients, substitute the BC:
The coefficients $A_n$ and $B_n$ may be determined by a (full) Fourier expansion of $f(\theta)$. Note that it's implied that $f$ must have period $2\pi$ since we are solving this in a domain (a circle specifically) where $\theta$ and $\theta + 2\pi$ label the same point.
You probably don't like infinite series solutions. Well, it happens that through a variety of manipulations it's possible to express the full solution of this particular problem as:

$$u(r, \theta) = \frac{1}{2\pi}\int_0^{2\pi} \frac{\left(1 - r^2\right) f(\phi)}{1 - 2r\cos(\theta - \phi) + r^2}\, d\phi$$
This is called Poisson's integral formula.
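A quick numerical check of the formula, assuming the unit disc and an arbitrary $2\pi$-periodic boundary function:

```python
import numpy as np

# Numerical check of Poisson's integral formula on the unit disc
# (unit radius is an assumption; the boundary data f is an arbitrary
# illustrative choice). As r -> 1 the integral should reproduce f.
def f(phi):
    return np.cos(phi) + 0.5 * np.sin(3 * phi)   # any 2*pi-periodic data

def u(r, theta, n=4000):
    phi = np.linspace(0, 2 * np.pi, n, endpoint=False)
    kernel = (1 - r ** 2) / (1 - 2 * r * np.cos(theta - phi) + r ** 2)
    return np.mean(kernel * f(phi))               # (1/2pi) * integral

theta0 = 1.2
for r in (0.0, 0.5, 0.9, 0.99):
    print(f"u({r}, {theta0}) = {u(r, theta0):+.4f}")
print("boundary value f  =", f"{f(theta0):+.4f}")
```

As $r \to 1$, the computed values approach the boundary data, as they must.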
Derivation of the Laplacian in Polar Coordinates
Though not necessarily a PDEs concept, it is very important for anyone studying this kind of math to be comfortable with going from one coordinate system to the next. What follows is a long derivation of the Laplacian in 2D polar coordinates using the multivariable chain rule and the concept of differentials. Know, however, that there are really many ways to do this.
Three definitions are all we need to begin:

$$x = r\cos\theta, \qquad y = r\sin\theta, \qquad u = u(x, y)$$
If it's known that $u = u(x, y)$ with $x = x(r, \theta)$ and $y = y(r, \theta)$, then the chain rule may be used to express derivatives in terms of $r$ and $\theta$ alone. Two applications will be necessary to obtain the second derivatives. Manipulating operators as if they meant something on their own:
Applying this to itself, treating the underlined bit as a unit dependent on $r$ and $\theta$:
The above mess may be quickly simplified a little by manipulating the funny looking derivatives:
This may be made slightly easier to work with if a few changes are made to the way some of the derivatives are written. Also, the variable follows analogously:
Now we need to obtain expressions for some of the derivatives appearing above. The most direct path would use the concept of differentials. If:
Solving by substitution for and gives:
If $F = F(x, y)$, then the total differential is given as:

$$dF = \frac{\partial F}{\partial x}\, dx + \frac{\partial F}{\partial y}\, dy$$
Note that the two previous equations are of this form (recall that $r = r(x, y)$ and $\theta = \theta(x, y)$, just like above), which means that:
Equating coefficients quickly yields a bunch of derivatives:
There's an easier but more abstract way to obtain the derivatives above that may be overkill but is worth mentioning anyway. The Jacobian of the functions $x(r, \theta)$ and $y(r, \theta)$ is:

$$\mathbf{J} = \begin{bmatrix} \dfrac{\partial x}{\partial r} & \dfrac{\partial x}{\partial \theta} \\ \dfrac{\partial y}{\partial r} & \dfrac{\partial y}{\partial \theta} \end{bmatrix} = \begin{bmatrix} \cos\theta & -r\sin\theta \\ \sin\theta & r\cos\theta \end{bmatrix}$$
Note that the Jacobian is a compact representation of the coefficients of the total derivative; using as an example (bold indicating vectors):
So, it follows then that the derivatives that we're interested in may be obtained by inverting the Jacobian matrix:
Though somewhat obscure, this is very convenient and it's just one of the many utilities of the Jacobian matrix. An interesting bit of insight is gained: coordinate changes are senseless unless the Jacobian is invertible everywhere except at isolated points; stated another way, the determinant of the Jacobian matrix must be nonzero, otherwise the coordinate change is not one-to-one (note that the determinant will be zero at $r = 0$ in this example. An isolated point such as this is not problematic.).
Either path you take, there should now be enough information to evaluate the Cartesian second derivatives. Working on $\dfrac{\partial^2 u}{\partial x^2}$:
Proceeding similarly for $\dfrac{\partial^2 u}{\partial y^2}$:
Now, add these tirelessly hand-crafted differential operators and watch the result collapse into just 3 nontrigonometric terms:

$$\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = \frac{\partial^2 u}{\partial r^2} + \frac{1}{r}\frac{\partial u}{\partial r} + \frac{1}{r^2}\frac{\partial^2 u}{\partial \theta^2}$$
That was a lot of work. To save trouble, here is the Laplacian in the two other popular coordinate systems.

Cylindrical coordinates $(r, \theta, z)$:

$$\nabla^2 u = \frac{1}{r}\frac{\partial}{\partial r}\left(r\frac{\partial u}{\partial r}\right) + \frac{1}{r^2}\frac{\partial^2 u}{\partial \theta^2} + \frac{\partial^2 u}{\partial z^2}$$

Spherical coordinates $(\rho, \phi, \theta)$, with $\phi$ the polar (colatitude) angle:

$$\nabla^2 u = \frac{1}{\rho^2}\frac{\partial}{\partial \rho}\left(\rho^2\frac{\partial u}{\partial \rho}\right) + \frac{1}{\rho^2\sin\phi}\frac{\partial}{\partial \phi}\left(\sin\phi\,\frac{\partial u}{\partial \phi}\right) + \frac{1}{\rho^2\sin^2\phi}\frac{\partial^2 u}{\partial \theta^2}$$
Derivatives have been combined wherever possible (not done previously).
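If the hand computation feels error-prone, the result is easy to spot-check with a computer algebra system. The sketch below verifies that the derived polar form annihilates the harmonic family $u = r^n\cos(n\theta)$ (the choice of test family is arbitrary):

```python
import sympy as sp

# Symbolic spot-check of the derived polar Laplacian
# u_rr + u_r / r + u_tt / r**2 using a known harmonic family
# u = r**n * cos(n*t); since these are harmonic, the result must be 0.
r, t, n = sp.symbols('r t n', positive=True)
u = r**n * sp.cos(n * t)

polar_laplacian = (sp.diff(u, r, 2)
                   + sp.diff(u, r) / r
                   + sp.diff(u, t, 2) / r**2)

print(sp.simplify(polar_laplacian))   # prints 0
```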
Concluding Remarks
This was a long, involved chapter. It should be clear that the solutions derived work only for very simple geometries, other geometries may be worked with by grace of conformal mappings.
The Laplacian (and variations of it) is a very important quantity and its behaviour is worth knowing like the back of your hand. A sampling of important equations that involve the Laplacian:
• The Navier Stokes equations.
• The diffusion equation.
• Laplace's equation.
• Poisson's equation.
• The Helmholtz equation.
• The Schrödinger equation.
• The wave equation.
There are a couple of other operators that are similar to (though less important than) the Laplacian, which deserve mention:
• Biharmonic operator, in three Cartesian dimensions:

$$\nabla^4 u = \frac{\partial^4 u}{\partial x^4} + \frac{\partial^4 u}{\partial y^4} + \frac{\partial^4 u}{\partial z^4} + 2\frac{\partial^4 u}{\partial x^2 \partial y^2} + 2\frac{\partial^4 u}{\partial y^2 \partial z^2} + 2\frac{\partial^4 u}{\partial x^2 \partial z^2}$$
The biharmonic equation is useful in linear elastic theory, for example it can describe "creeping" fluid flow:

$$\nabla^4 \psi = 0$$
• d'Alembertian:

$$\Box = \frac{1}{c^2}\frac{\partial^2}{\partial t^2} - \nabla^2$$
The wave equation may be expressed using the d'Alembertian:

$$\Box u = 0$$
Though expressing it with the Laplacian is more popular:

$$\frac{\partial^2 u}{\partial t^2} = c^2 \nabla^2 u$$
Journal article
A general approach to the electronic spin relaxation of Gd(III) complexes in solutions. Monte Carlo simulations beyond the Redfield limit
The time correlation functions of the electronic spin components of a metal ion without orbital degeneracy in solution are computed. The approach is based on the numerical solution of the time-dependent Schrödinger equation for a stochastic perturbing Hamiltonian which is simulated by a Monte Carlo algorithm using discrete time steps. The perturbing Hamiltonian is quite general, including the superposition of both the static mean crystal field contribution in the molecular frame and the usual transient ligand field term. The Hamiltonian of the static crystal field can involve the terms of all orders, which are invariant under the local group of the average geometry of the complex. In the laboratory frame, the random rotation of the complex is the only source of modulation of this Hamiltonian, whereas an additional Ornstein–Uhlenbeck process is needed to describe the time fluctuations of the Hamiltonian of the transient crystal field. A numerical procedure for computing the electronic paramagnetic resonance (EPR) spectra is proposed and discussed. For the [Gd(H2O)8]3+ octa-aqua ion and the [Gd(DOTA)(H2O)]– complex [DOTA = 1,4,7,10-tetrakis(carboxymethyl)-1,4,7,10-tetraazacyclododecane] in water, the predictions of the Redfield relaxation theory are compared with those of the Monte Carlo approach. The Redfield approximation is shown to be accurate for all temperatures and for electronic resonance frequencies at and above X-band, justifying the previous interpretations of EPR spectra. At lower frequencies the transverse and longitudinal relaxation functions derived from the Redfield approximation display significantly faster decays than the corresponding simulated functions. The practical interest of this simulation approach is underlined.
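To make the simulation strategy concrete, here is a heavily simplified sketch of the scheme the abstract describes: discrete-time propagation of the Schrödinger equation under a randomly fluctuating Hamiltonian whose transient term follows an Ornstein–Uhlenbeck process. A spin 1/2 stands in for Gd(III)'s S = 7/2, and all rates and amplitudes are invented for illustration.

```python
import numpy as np

# Heavily simplified sketch: discrete-time propagation of a spin whose
# Hamiltonian carries a randomly fluctuating term b(t)*Sz, with b(t)
# an Ornstein-Uhlenbeck process (hbar = 1). The ensemble-averaged
# coherence <S+>(t) plays the role of a transverse relaxation function.
rng = np.random.default_rng(0)
Sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)

dt, steps, n_traj = 1e-2, 2000, 400
omega0 = 1.0             # static Zeeman frequency
tau, sigma = 0.5, 0.6    # OU correlation time and amplitude

coherence = np.zeros(steps, dtype=complex)
for _ in range(n_traj):
    psi = np.array([1, 1], dtype=complex) / np.sqrt(2)   # transverse state
    b = sigma * rng.normal()                             # stationary start
    for i in range(steps):
        # Ornstein-Uhlenbeck update of the random field coefficient
        b += -b / tau * dt + sigma * np.sqrt(2 * dt / tau) * rng.normal()
        H = (omega0 + b) * Sz
        psi = psi - 1j * dt * (H @ psi)   # first-order step in dt
        psi /= np.linalg.norm(psi)
        coherence[i] += np.conj(psi[0]) * psi[1]         # <S+>(t)
coherence /= n_traj
print("|<S+>| decays from", abs(coherence[0]), "to", abs(coherence[-1]))
```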
55d3c757beb01ec0 | Perturbation theory (quantum mechanics)
From Wikipedia, the free encyclopedia
Applications of perturbation theory
In the theory of quantum electrodynamics (QED), in which the electron–photon interaction is treated perturbatively, the calculation of the electron's magnetic moment has been found to agree with experiment to eleven decimal places.[1] In QED and other quantum field theories, special calculation techniques known as Feynman diagrams are used to systematically sum the power series terms.
Perturbation theory also fails to describe states that are not generated adiabatically from the "free model", including bound states and various collective phenomena such as solitons. Imagine, for example, that we have a system of free (i.e. non-interacting) particles, to which an attractive interaction is introduced. Depending on the form of the interaction, this may create an entirely new set of eigenstates corresponding to groups of particles bound to one another. An example of this phenomenon may be found in conventional superconductivity, in which the phonon-mediated attraction between conduction electrons leads to the formation of correlated electron pairs known as Cooper pairs. When faced with such systems, one usually turns to other approximation schemes, such as the variational method and the WKB approximation. This is because there is no analogue of a bound particle in the unperturbed model, and the energy of a soliton typically goes as the inverse of the expansion parameter. However, if we "integrate" over the solitonic phenomena, the nonperturbative corrections in this case will be tiny, of the order of exp(−1/g) or exp(−1/g²) in the perturbation parameter g. Perturbation theory can only detect solutions "close" to the unperturbed solution, even if there are other solutions for which the perturbative expansion is not valid.
The problem of non-perturbative systems has been somewhat alleviated by the advent of modern computers. It has become practical to obtain numerical non-perturbative solutions for certain problems, using methods such as density functional theory. These advances have been of particular benefit to the field of quantum chemistry. Computers have also been used to carry out perturbation theory calculations to extraordinarily high levels of precision, which has proven important in particle physics for generating theoretical results that can be compared with experiment.
Time-independent perturbation theory
Time-independent perturbation theory is one of two categories of perturbation theory, the other being time-dependent perturbation (see next section). In time-independent perturbation theory the perturbation Hamiltonian is static (i.e., possesses no time dependence). Time-independent perturbation theory was presented by Erwin Schrödinger in a 1926 paper,[2] shortly after he produced his theories in wave mechanics. In this paper Schrödinger referred to earlier work of Lord Rayleigh,[3] who investigated harmonic vibrations of a string perturbed by small inhomogeneities. This is why this perturbation theory is often referred to as Rayleigh–Schrödinger perturbation theory.
First order corrections
We begin[4] with an unperturbed Hamiltonian H0, which is also assumed to have no time dependence. It has known energy levels and eigenstates, arising from the time-independent Schrödinger equation:
$$H_0 |n^{(0)}\rangle = E_n^{(0)} |n^{(0)}\rangle, \qquad n = 1, 2, 3, \ldots$$
For simplicity, we have assumed that the energies are discrete. The (0) superscripts denote that these quantities are associated with the unperturbed system. Note the use of bra–ket notation.
We now introduce a perturbation to the Hamiltonian. Let V be a Hamiltonian representing a weak physical disturbance, such as a potential energy produced by an external field. (Thus, V is formally a Hermitian operator.) Let λ be a dimensionless parameter that can take on values ranging continuously from 0 (no perturbation) to 1 (the full perturbation). The perturbed Hamiltonian is
$$H = H_0 + \lambda V.$$
The energy levels and eigenstates of the perturbed Hamiltonian are again given by the Schrödinger equation:
$$(H_0 + \lambda V)\,|n\rangle = E_n |n\rangle.$$
Our goal is to express $E_n$ and $|n\rangle$ in terms of the energy levels and eigenstates of the old Hamiltonian. If the perturbation is sufficiently weak, we can write them as power series in λ:
$$E_n = E_n^{(0)} + \lambda E_n^{(1)} + \lambda^2 E_n^{(2)} + \cdots, \qquad |n\rangle = |n^{(0)}\rangle + \lambda |n^{(1)}\rangle + \lambda^2 |n^{(2)}\rangle + \cdots$$
When λ = 0, these reduce to the unperturbed values, which are the first term in each series. Since the perturbation is weak, the energy levels and eigenstates should not deviate too much from their unperturbed values, and the terms should rapidly become smaller as we go to higher order.
Substituting the power series expansion into the Schrödinger equation, we obtain
$$(H_0 + \lambda V)\left(|n^{(0)}\rangle + \lambda |n^{(1)}\rangle + \cdots\right) = \left(E_n^{(0)} + \lambda E_n^{(1)} + \cdots\right)\left(|n^{(0)}\rangle + \lambda |n^{(1)}\rangle + \cdots\right).$$
Expanding this equation and comparing coefficients of each power of λ results in an infinite series of simultaneous equations. The zeroth-order equation is simply the Schrödinger equation for the unperturbed system. The first-order equation is
$$H_0 |n^{(1)}\rangle + V |n^{(0)}\rangle = E_n^{(0)} |n^{(1)}\rangle + E_n^{(1)} |n^{(0)}\rangle.$$
Operating through by $\langle n^{(0)}|$, the first term on the left-hand side cancels with the first term on the right-hand side (recall, the unperturbed Hamiltonian is Hermitian). This leads to the first-order energy shift:
$$E_n^{(1)} = \langle n^{(0)} | V | n^{(0)} \rangle.$$
This is simply the expectation value of the perturbation Hamiltonian while the system is in the unperturbed state. This result can be interpreted in the following way: suppose the perturbation is applied, but we keep the system in the quantum state $|n^{(0)}\rangle$, which is a valid quantum state though no longer an energy eigenstate. The perturbation causes the average energy of this state to increase by $\lambda \langle n^{(0)}|V|n^{(0)}\rangle$. However, the true energy shift is slightly different, because the perturbed eigenstate is not exactly the same as $|n^{(0)}\rangle$. These further shifts are given by the second and higher order corrections to the energy.
Before we compute the corrections to the energy eigenstate, we need to address the issue of normalization. We may suppose
$$\langle n^{(0)} | n^{(0)} \rangle = 1,$$
but perturbation theory assumes we also have $\langle n | n \rangle = 1$. It follows that at first order in λ, we must have
$$\langle n^{(0)} | n^{(1)} \rangle + \langle n^{(1)} | n^{(0)} \rangle = 0.$$
Since the overall phase is not determined in quantum mechanics, without loss of generality, we may assume $\langle n^{(0)} | n^{(1)} \rangle$ is purely real. Therefore,
$$\langle n^{(0)} | n^{(1)} \rangle = \langle n^{(1)} | n^{(0)} \rangle,$$
and we deduce
$$\langle n^{(0)} | n^{(1)} \rangle = 0.$$
To obtain the first-order correction to the energy eigenstate, we insert our expression for the first-order energy correction back into the result shown above of equating the first-order coefficients of λ. We then make use of the resolution of the identity,
$$V |n^{(0)}\rangle = \Big(\sum_{k \neq n} |k^{(0)}\rangle \langle k^{(0)}|\Big) V |n^{(0)}\rangle + \big(|n^{(0)}\rangle \langle n^{(0)}|\big) V |n^{(0)}\rangle,$$
where the $|k^{(0)}\rangle$ are in the orthogonal complement of $|n^{(0)}\rangle$. The first-order equation may thus be expressed as
$$\big(E_n^{(0)} - H_0\big)\,|n^{(1)}\rangle = \sum_{k \neq n} |k^{(0)}\rangle \langle k^{(0)} | V | n^{(0)} \rangle.$$
For the moment, suppose that the zeroth-order energy level is not degenerate, i.e. there is no eigenstate of H0 in the orthogonal complement of $|n^{(0)}\rangle$ with the energy $E_n^{(0)}$. After renaming the summation dummy index, we can pick any $k \neq n$ and multiply through by $\langle k^{(0)}|$, giving
$$\big(E_n^{(0)} - E_k^{(0)}\big)\,\langle k^{(0)} | n^{(1)} \rangle = \langle k^{(0)} | V | n^{(0)} \rangle.$$
The above gives us the component of the first-order correction along each $|k^{(0)}\rangle$.
Thus, in total we get
$$|n^{(1)}\rangle = \sum_{k \neq n} \frac{\langle k^{(0)} | V | n^{(0)} \rangle}{E_n^{(0)} - E_k^{(0)}}\, |k^{(0)}\rangle.$$
The first-order change in the n-th energy eigenket has a contribution from each of the energy eigenstates k ≠ n. Each term is proportional to the matrix element $\langle k^{(0)}|V|n^{(0)}\rangle$, which is a measure of how much the perturbation mixes eigenstate n with eigenstate k; it is also inversely proportional to the energy difference between eigenstates k and n, which means that the perturbation deforms the eigenstate to a greater extent if there are more eigenstates at nearby energies. We see also that the expression is singular if any of these states have the same energy as state n, which is why we assumed that there is no degeneracy.
Second-order and higher corrections
We can find the higher-order deviations by a similar procedure, though the calculations become quite tedious with our current formulation. Our normalization prescription gives that
$$2\,\langle n^{(0)} | n^{(2)} \rangle + \langle n^{(1)} | n^{(1)} \rangle = 0.$$
Up to second order, the expression for the energy is
$$E_n = E_n^{(0)} + \lambda \langle n^{(0)}|V|n^{(0)}\rangle + \lambda^2 \sum_{k \neq n} \frac{\big|\langle k^{(0)}|V|n^{(0)}\rangle\big|^2}{E_n^{(0)} - E_k^{(0)}} + O(\lambda^3),$$
and the (normalized) eigenstate to first order is the expression for $|n^{(1)}\rangle$ given above; the second-order state correction is built analogously from products of two matrix elements.
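As a quick plausibility check of these formulas, the following sketch (an illustration of my own, not part of the article) compares the first- and second-order energy corrections with exact diagonalization for a randomly chosen Hermitian perturbation of a diagonal H0.

```python
import numpy as np

rng = np.random.default_rng(1)
dim, lam, n = 8, 1e-3, 0           # n indexes the (non-degenerate) level studied

E0 = np.arange(dim, dtype=float)   # well-separated unperturbed energies
H0 = np.diag(E0)
A = rng.standard_normal((dim, dim)) + 1j * rng.standard_normal((dim, dim))
V = (A + A.conj().T) / 2           # random Hermitian perturbation

E1 = V[n, n].real                                  # first-order shift
k = np.arange(dim) != n
E2 = np.sum(np.abs(V[k, n])**2 / (E0[n] - E0[k]))  # second-order shift

E_exact = np.linalg.eigvalsh(H0 + lam * V)[n]
E_pt = E0[n] + lam * E1 + lam**2 * E2
print(E_exact - E_pt)   # should be of order lam**3
```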
Extending the process further, the third-order energy correction can be shown to be [5]
Effects of degeneracy
Suppose that two or more energy eigenstates are degenerate. The first-order energy shift is not well defined, since there is no unique way to choose a basis of eigenstates for the unperturbed system. The various eigenstates for a given energy will perturb with different energies, or may possess no continuous family of perturbations at all. This is manifested in the calculation of the perturbed eigenstate via the fact that the operator
$$E_n^{(0)} - H_0$$
does not have a well-defined inverse.
Let D denote the subspace spanned by these degenerate eigenstates. No matter how small the perturbation is, in the degenerate subspace D the energy differences between the eigenstates of H0 are zero, so complete mixing of at least some of these states is assured. Typically the eigenvalues will split and the eigenspaces will become simple (one-dimensional), or at least of smaller dimension than D. The successful perturbations will not be "small" relative to a poorly chosen basis of D. Instead, we consider the perturbation "small" if the new eigenstate is close to the subspace D. The new Hamiltonian must be diagonalized in D, or a slight variation of D, so to speak. These correct perturbed eigenstates in D are now the basis for the perturbation expansion:
For the first-order perturbation we need to solve the perturbed Hamiltonian restricted to the degenerate subspace D
simultaneously for all the degenerate eigenstates, where the first-order corrections to the degenerate energy levels appear as eigenvalues, and "small" is a small vector orthogonal to D. This is equivalent to diagonalizing the matrix of V restricted to D, with entries $\langle k^{(0)} | V | l^{(0)} \rangle$ for $|k^{(0)}\rangle, |l^{(0)}\rangle \in D$.
This procedure is approximate, since we neglected states outside the D subspace. The splitting of degenerate energies is generally observed. Although the splitting may be small compared to the range of energies found in the system, it is crucial in understanding certain details, such as spectral lines in Electron Spin Resonance experiments.
Higher-order corrections due to other eigenstates can be found in the same way as for the non-degenerate case
The operator on the left hand side is not singular when applied to eigenstates outside D, so we can write
but the effect on the degenerate states is minuscule, proportional to the square of the first-order correction.
Near-degenerate states should also be treated in the above manner, since the energy differences of the original Hamiltonian need not be large compared to the perturbation in the near-degenerate subspace. An application is found in the nearly free electron model, where near-degeneracy, treated properly, gives rise to an energy gap even for small perturbations. Other eigenstates will only shift the absolute energy of all near-degenerate states simultaneously.
Generalization to multi-parameter case
The generalization of the time-independent perturbation theory to the case where there are multiple small parameters in place of λ can be formulated more systematically using the language of differential geometry, which basically defines the derivatives of the quantum states and calculates the perturbative corrections by taking derivatives iteratively at the unperturbed point.
Hamiltonian and force operator
From the differential geometric point of view, a parameterized Hamiltonian is considered as a function defined on the parameter manifold that maps each particular set of parameters to a Hermitian operator H(x μ) that acts on the Hilbert space. The parameters here can be an external field, an interaction strength, or driving parameters in a quantum phase transition. Let En(x μ) and $|n(x^\mu)\rangle$ be the n-th eigenenergy and eigenstate of H(x μ), respectively. In the language of differential geometry, the states $|n(x^\mu)\rangle$ form a vector bundle over the parameter manifold, on which derivatives of these states can be defined. The perturbation theory is to answer the following question: given $E_n(0)$ and $|n(0)\rangle$ at an unperturbed reference point, how to estimate $E_n(x^\mu)$ and $|n(x^\mu)\rangle$ at $x^\mu$ close to that reference point.
Without loss of generality, the coordinate system can be shifted, such that the reference point is set to be the origin. The following linearly parameterized Hamiltonian is frequently used:
$$H(x^\mu) = H(0) + x^\mu F_\mu,$$
with an implicit sum over the index μ.
If the parameters x μ are considered as generalized coordinates, then Fμ should be identified as the generalized force operators related to those coordinates. Different indices μ label the different forces along different directions in the parameter manifold. For example, if x μ denotes the external magnetic field in the μ-direction, then Fμ should be the magnetization in the same direction.
Perturbation theory as power series expansion
The validity of the perturbation theory rests on the adiabatic assumption, which assumes the eigenenergies and eigenstates of the Hamiltonian are smooth functions of the parameters, such that their values in the vicinity region can be calculated in power series (like a Taylor expansion) of the parameters:
$$E_n(x^\mu) = E_n + x^\mu \partial_\mu E_n + \frac{1}{2!} x^\mu x^\nu \partial_\mu \partial_\nu E_n + \cdots$$
Here $\partial_\mu$ denotes the derivative with respect to $x^\mu$. When applied to the state $|n\rangle$, it should be understood as the covariant derivative if the vector bundle is equipped with a non-vanishing connection. All the terms on the right-hand side of the series are evaluated at $x^\mu = 0$, e.g. $E_n \equiv E_n(0)$ and $|n\rangle \equiv |n(0)\rangle$. This convention will be adopted throughout this subsection: all functions without the parameter dependence explicitly stated are assumed to be evaluated at the origin. The power series may converge slowly or even fail to converge when the energy levels are close to each other. The adiabatic assumption breaks down when there is energy level degeneracy, and hence the perturbation theory is not applicable in that case.
Hellmann–Feynman theorems
The above power series expansion can be readily evaluated if there is a systematic approach to calculate the derivatives to any order. Using the chain rule, the derivatives can be broken down to single derivatives of either the energy or the state. The Hellmann–Feynman theorems are used to calculate these single derivatives. The first Hellmann–Feynman theorem gives the derivative of the energy,
$$\partial_\mu E_n = \langle n | \partial_\mu H | n \rangle.$$
The second Hellmann–Feynman theorem gives the derivative of the state (resolved by the complete basis with m ≠ n),
$$\langle m | \partial_\mu n \rangle = \frac{\langle m | \partial_\mu H | n \rangle}{E_n - E_m}.$$
For the linearly parameterized Hamiltonian, $\partial_\mu H$ simply stands for the generalized force operator $F_\mu$.
The theorems can be simply derived by applying the differential operator $\partial_\mu$ to both sides of the Schrödinger equation, which reads
$$\partial_\mu H |n\rangle + H |\partial_\mu n\rangle = \partial_\mu E_n |n\rangle + E_n |\partial_\mu n\rangle.$$
Then overlap with the state $\langle m|$ from the left and make use of the Schrödinger equation again:
$$\langle m | \partial_\mu H | n \rangle + E_m \langle m | \partial_\mu n \rangle = \partial_\mu E_n \langle m | n \rangle + E_n \langle m | \partial_\mu n \rangle.$$
Given that the eigenstates of the Hamiltonian always form an orthonormal basis, $\langle m | n \rangle = \delta_{mn}$, the cases of m = n and m ≠ n can be discussed separately. The first case leads to the first theorem and the second case to the second theorem, which follows immediately by rearranging the terms. With the differential rules given by the Hellmann–Feynman theorems, the perturbative corrections to the energies and states can be calculated systematically.
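A small numerical sanity check of the first theorem (an illustrative sketch of mine, not from the article): for H(x) = H0 + xF with randomly chosen Hermitian H0 and F, the finite-difference derivative of an eigenvalue should match ⟨n|F|n⟩.

```python
import numpy as np

rng = np.random.default_rng(2)
dim, n, x0, h = 6, 0, 0.3, 1e-6

def random_hermitian(d):
    A = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    return (A + A.conj().T) / 2

H0, F = random_hermitian(dim), random_hermitian(dim)  # assumed H0 and force operator

def level(x):
    E, U = np.linalg.eigh(H0 + x * F)
    return E[n], U[:, n]

E, psi = level(x0)
dE_numeric = (level(x0 + h)[0] - level(x0 - h)[0]) / (2 * h)
dE_theorem = (psi.conj() @ F @ psi).real   # first Hellmann-Feynman theorem
print(dE_numeric - dE_theorem)             # ~0, up to finite-difference error
```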
Correction of energy and state
To the second order, the energy correction reads
$$E_n(x^\mu) = E_n + x^\mu \langle n | F_\mu | n \rangle + x^\mu x^\nu \sum_{m \neq n} \frac{\mathrm{Re}\big(\langle n | F_\mu | m \rangle \langle m | F_\nu | n \rangle\big)}{E_n - E_m} + \cdots,$$
where $\mathrm{Re}$ denotes the real-part function. The first-order derivative $\partial_\mu E_n$ is given by the first Hellmann–Feynman theorem directly. To obtain the second-order derivative $\partial_\mu \partial_\nu E_n$, simply apply the differential operator $\partial_\mu$ to the result of the first-order derivative $\langle n | \partial_\nu H | n \rangle$, which reads
$$\partial_\mu \partial_\nu E_n = \langle \partial_\mu n | \partial_\nu H | n \rangle + \langle n | \partial_\mu \partial_\nu H | n \rangle + \langle n | \partial_\nu H | \partial_\mu n \rangle.$$
Note that for the linearly parameterized Hamiltonian, there is no second derivative, $\partial_\mu \partial_\nu H = 0$, on the operator level. Resolve the derivative of the state by inserting the complete set of basis states,
$$|\partial_\mu n\rangle = \sum_m |m\rangle \langle m | \partial_\mu n \rangle;$$
then all parts can be calculated using the Hellmann–Feynman theorems. In terms of Lie derivatives, $\langle n | \partial_\mu n \rangle = 0$ according to the definition of the connection for the vector bundle. Therefore, the case m = n can be excluded from the summation, which avoids the singularity of the energy denominator. The same procedure can be carried on for higher-order derivatives, from which higher-order corrections are obtained.
The same computational scheme is applicable for the correction of states, and the result to second order is obtained in the same fashion. Both energy derivatives and state derivatives are involved in the deduction. Whenever a state derivative is encountered, resolve it by inserting the complete set of basis states; then the Hellmann–Feynman theorems become applicable. Because differentiation can be carried out systematically, the series-expansion approach to the perturbative corrections can be coded on computers with symbolic processing software such as Mathematica.
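As a toy illustration of that remark (using SymPy rather than Mathematica; the two-level Hamiltonian below is my own assumption), one can let the computer Taylor-expand an exact eigenvalue in λ and read off the corrections order by order:

```python
import sympy as sp

lam, v, d = sp.symbols('lambda v Delta', positive=True)
Ea = sp.Symbol('E_a', real=True)
Eb = Ea + d   # assume E_b = E_a + Delta with Delta > 0

# Lower eigenvalue of H = diag(E_a, E_b) + lam*v*sigma_x.
E_exact = (Ea + Eb)/2 - sp.sqrt(d**2 + 4*lam**2*v**2)/2

# Series in lam: E_a + 0*lam - (v**2/Delta)*lam**2 + ..., i.e. the zeroth-,
# (vanishing) first-, and second-order corrections from the formulas above.
print(sp.series(E_exact, lam, 0, 4))
```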
Effective Hamiltonian
Let H(0) be the Hamiltonian completely restricted either to the low-energy subspace or to the high-energy subspace, such that there is no matrix element in H(0) connecting the low- and the high-energy subspaces, i.e. $\langle l | H(0) | h \rangle = 0$ whenever $|l\rangle$ lies in the low-energy subspace and $|h\rangle$ in the high-energy subspace. Let Fμ = ∂μH be the coupling terms connecting the subspaces. Then, when the high-energy degrees of freedom are integrated out, the effective Hamiltonian in the low-energy subspace reads[6]
Here the operators are restricted to the low-energy subspace. The above result can be derived by a power series expansion.
In a formal way it is possible to define an effective Hamiltonian that gives exactly the low-lying energy states and wavefunctions.[7] In practice, some kind of approximation (perturbation theory) is generally required.
Time-dependent perturbation theory
Method of variation of constants
Since the perturbed Hamiltonian is time-dependent, so are its energy levels and eigenstates. Thus, the goals of time-dependent perturbation theory are slightly different from time-independent perturbation theory. One is interested in the following quantities:
• The time-dependent expectation value of some observable A, for a given initial state.
• The time-dependent amplitudes of those quantum states that are energy eigenkets (eigenvectors) in the unperturbed system.
The first quantity is important because it gives rise to the classical result of an A measurement performed on a macroscopic number of copies of the perturbed system. For example, we could take A to be the displacement in the x-direction of the electron in a hydrogen atom, in which case the expected value, when multiplied by an appropriate coefficient, gives the time-dependent dielectric polarization of a hydrogen gas. With an appropriate choice of perturbation (i.e. an oscillating electric potential), this allows one to calculate the AC permittivity of the gas.
We will briefly examine the method behind Dirac's formulation of time-dependent perturbation theory. Choose an energy basis for the unperturbed system. (We drop the (0) superscripts for the eigenstates, because it is not useful to speak of energy levels and eigenstates for the perturbed system.)
If the unperturbed system is in eigenstate $|j\rangle$ at time t = 0, its state at subsequent times varies only by a phase (in the Schrödinger picture, where state vectors evolve in time and operators are constant):
$$|\psi(t)\rangle = e^{-iE_j t/\hbar}\, |j\rangle.$$
Now, introduce a time-dependent perturbing Hamiltonian V(t). The Hamiltonian of the perturbed system is
$$H = H_0 + V(t).$$
Let $|\psi(t)\rangle$ denote the quantum state of the perturbed system at time t. It obeys the time-dependent Schrödinger equation,
$$i\hbar \frac{\partial}{\partial t}\, |\psi(t)\rangle = H\, |\psi(t)\rangle.$$
The quantum state at each instant can be expressed as a linear combination of the complete eigenbasis of $H_0$:
$$|\psi(t)\rangle = \sum_n c_n(t)\, e^{-iE_n t/\hbar}\, |n\rangle,$$
where the $c_n(t)$'s are complex functions of t, to be determined, which we will refer to as amplitudes (strictly speaking, they are the amplitudes in the Dirac picture).
We have explicitly extracted the exponential phase factors on the right hand side. This is only a matter of convention, and may be done without loss of generality. The reason we go to this trouble is that when the system starts in the state and no perturbation is present, the amplitudes have the convenient property that, for all t, cj(t) = 1 and cn(t) = 0 if n ≠ j.
The absolute square of the amplitude, $|c_n(t)|^2$, is the probability that the system is in state n at time t, since
$$\big|\langle n | \psi(t) \rangle\big|^2 = |c_n(t)|^2.$$
Plugging into the Schrödinger equation and using the fact that ∂/∂t acts by the product rule, one obtains
By resolving the identity in front of V, this can be reduced to a set of coupled differential equations for the amplitudes,
$$\frac{dc_n}{dt} = \frac{-i}{\hbar} \sum_k \langle n | V(t) | k \rangle\, c_k(t)\, e^{-i(E_k - E_n)t/\hbar}.$$
The matrix elements of V play a similar role as in time-independent perturbation theory, being proportional to the rate at which amplitudes are shifted between states. Note, however, that the direction of the shift is modified by the exponential phase factor. Over times much longer than $\hbar/(E_k - E_n)$, the phase winds through many full cycles. If the time-dependence of V is sufficiently slow, this may cause the state amplitudes to oscillate. (E.g., such oscillations are useful for managing radiative transitions in a laser.)
Up to this point, we have made no approximations, so this set of differential equations is exact. By supplying appropriate initial values cn(t), we could in principle find an exact (i.e., non-perturbative) solution. This is easily done when there are only two energy levels (n = 1, 2), and this solution is useful for modelling systems like the ammonia molecule.
However, exact solutions are difficult to find when there are many energy levels, and one instead looks for perturbative solutions. These may be obtained by expressing the equations in an integral form,
$$c_n(t) = c_n(0) + \frac{-i}{\hbar} \sum_k \int_0^t \langle n | V(t') | k \rangle\, c_k(t')\, e^{-i(E_k - E_n)t'/\hbar}\, dt'.$$
Repeatedly substituting this expression for $c_n$ back into the right-hand side yields an iterative solution,
$$c_n(t) = c_n^{(0)} + c_n^{(1)}(t) + c_n^{(2)}(t) + \cdots,$$
where, for example, the first-order term is
$$c_n^{(1)}(t) = \frac{-i}{\hbar} \sum_k \int_0^t \langle n | V(t') | k \rangle\, c_k(0)\, e^{-i(E_k - E_n)t'/\hbar}\, dt'.$$
Several further results follow from this, such as Fermi's golden rule, which relates the rate of transitions between quantum states to the density of states at particular energies; or the Dyson series, obtained by applying the iterative method to the time evolution operator, which is one of the starting points for the method of Feynman diagrams.
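For concreteness, here is a direct numerical integration of the exact amplitude equations for an assumed two-level system with a constant off-diagonal coupling (an illustrative sketch, not from the article; ħ = 1 and the energies and coupling are arbitrary choices). The populations show the expected Rabi-type oscillations while the total norm stays near 1.

```python
import numpy as np

hbar = 1.0
E = np.array([0.0, 1.0])                 # assumed unperturbed energies
V = np.array([[0.0, 0.05],
              [0.05, 0.0]])              # assumed weak, constant coupling

def rhs(t, c):
    # dc_n/dt = -(i/hbar) * sum_k <n|V|k> exp(i (E_n - E_k) t / hbar) c_k
    phase = np.exp(1j * np.subtract.outer(E, E) * t / hbar)
    return (-1j / hbar) * (V * phase) @ c

# Fixed-step RK4 integration of the coupled amplitude equations.
c = np.array([1.0 + 0j, 0.0 + 0j])       # start in state 0
dt, t = 0.01, 0.0
for _ in range(20_000):
    k1 = rhs(t, c)
    k2 = rhs(t + dt/2, c + dt*k1/2)
    k3 = rhs(t + dt/2, c + dt*k2/2)
    k4 = rhs(t + dt, c + dt*k3)
    c += (dt/6) * (k1 + 2*k2 + 2*k3 + k4)
    t += dt

print(abs(c)**2, abs(c[0])**2 + abs(c[1])**2)  # populations; norm stays ~1
```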
Method of Dyson series
Time-dependent perturbations can be reorganized through the technique of the Dyson series. The Schrödinger equation
$$i\hbar \frac{\partial |\psi(t)\rangle}{\partial t} = H(t)\, |\psi(t)\rangle$$
has the formal solution
$$|\psi(t)\rangle = T \exp\!\left(-\frac{i}{\hbar} \int_0^t H(t')\, dt'\right) |\psi(0)\rangle,$$
where T is the time-ordering operator, which places operators at later times to the left. Thus, the exponential represents the following Dyson series,
$$|\psi(t)\rangle = \left(1 - \frac{i}{\hbar}\int_0^t dt_1\, H(t_1) - \frac{1}{\hbar^2}\int_0^t dt_1 \int_0^{t_1} dt_2\, H(t_1) H(t_2) + \cdots\right)|\psi(0)\rangle.$$
Note that in the second term, the 1/2! factor exactly cancels the double contribution due to the time-ordering operator, etc.
Consider the following perturbation problem
assuming that the parameter λ is small and that the problem has been solved.
Perform the following unitary transformation to the interaction picture (or Dirac picture),
Consequently, the Schrödinger equation simplifies to
so it is solved through the above Dyson series,
as a perturbation series with small λ.
Using the solution of the unperturbed problem (for the sake of simplicity, assume a pure discrete spectrum) yields, to first order, the transition amplitudes.
Thus the system, initially in an unperturbed eigenstate, can, by dint of the perturbation, go into another unperturbed eigenstate. The corresponding transition probability amplitude to first order is
as detailed in the previous section, while the corresponding transition probability to a continuum is furnished by Fermi's golden rule.
As an aside, note that time-independent perturbation theory is also organized inside this time-dependent perturbation theory Dyson series. To see this, write the unitary evolution operator, obtained from the above Dyson series, as
and take the perturbation V to be time-independent.
Using the identity resolution
$$1 = \sum_n |n\rangle \langle n|$$
for a pure discrete spectrum, one can write out the contributions explicitly.
It is evident that, at second order, one must sum over all the intermediate states. Assume the asymptotic limit of large times. This means that, at each contribution of the perturbation series, one has to add a multiplicative factor $e^{\varepsilon t'}$ in the integrands, for ε arbitrarily small, so that the limit t → ∞ gives back the final state of the system by eliminating all oscillating terms but keeping the secular ones. The integrals are then computable and, separating the diagonal terms from the others, one finds that the time-secular series yields the eigenvalues of the perturbed problem, recursively, whereas the remaining time-constant part yields the corrections to the stationary eigenfunctions also given above.
The unitary evolution operator is applicable to arbitrary eigenstates of the unperturbed problem and, in this case, yields a secular series that holds at small times.
Strong perturbation theory
In a similar way as for small perturbations, it is possible to develop a strong perturbation theory. Let us consider as usual the Schrödinger equation
and we consider the question of whether a dual Dyson series exists that applies in the limit of an increasingly large perturbation. This question can be answered affirmatively,[8] and the series is the well-known adiabatic series.[9] This approach is quite general and can be shown in the following way. Let us consider the perturbation problem
with λ → ∞. Our aim is to find a solution in the form
but a direct substitution into the above equation fails to produce useful results. This situation can be remedied by a rescaling of the time variable (as τ = λt), producing the following meaningful equations,
which can be solved once we know the solution of the leading-order equation. But we know that in this case we can use the adiabatic approximation. When the perturbation does not depend on time, one gets the Wigner–Kirkwood series that is often used in statistical mechanics. Indeed, in this case we introduce the unitary transformation
that defines a free picture, as we are trying to eliminate the interaction term. Now, in a dual way with respect to the small-perturbation case, we have to solve the Schrödinger equation
and we see that the expansion parameter λ appears only in the exponential, so that the corresponding Dyson series, a dual Dyson series, is meaningful at large λ and is
After the rescaling in time we can see that this is indeed a series in $1/\lambda$, justifying in this way the name of dual Dyson series. The reason is that we have obtained this series simply by interchanging H0 and V, and we can go from one to the other by applying this exchange. This is called the duality principle in perturbation theory. This choice yields, as already said, a Wigner–Kirkwood series that is a gradient expansion. The Wigner–Kirkwood series is a semiclassical series with eigenvalues given exactly as in the WKB approximation.[10]
Example of first order perturbation theory – ground state energy of the quartic oscillator
Let us consider the quantum harmonic oscillator with the quartic potential perturbation and the Hamiltonian (in the standard normalization, which we assume here)
$$H = \frac{p^2}{2m} + \frac{1}{2} m\omega^2 x^2 + \lambda x^4.$$
The ground state of the harmonic oscillator is
$$\psi_0(x) = \left(\frac{m\omega}{\pi\hbar}\right)^{1/4} e^{-m\omega x^2 / 2\hbar}$$
(normalized), and the energy of the unperturbed ground state is
$$E_0^{(0)} = \frac{\hbar\omega}{2}.$$
Using the first-order correction formula, we get
$$E_0^{(1)} = \lambda\, \langle \psi_0 | x^4 | \psi_0 \rangle = \frac{3\lambda\hbar^2}{4 m^2 \omega^2},$$
so that $E_0 \approx \frac{\hbar\omega}{2} + \frac{3\lambda\hbar^2}{4 m^2 \omega^2}$.
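This is easy to verify numerically (an illustrative check of mine; units ħ = m = ω = 1 are assumed, so the prediction is E0 ≈ 1/2 + 3λ/4) by diagonalizing the Hamiltonian in a truncated harmonic-oscillator number basis:

```python
import numpy as np

N, lam = 60, 1e-3
n = np.arange(N)

# Position operator x = (a + a^dagger)/sqrt(2) in the number basis.
x = np.zeros((N, N))
x[n[:-1], n[:-1] + 1] = np.sqrt((n[:-1] + 1) / 2.0)
x += x.T

H = np.diag(n + 0.5) + lam * np.linalg.matrix_power(x, 4)

E0_exact = np.linalg.eigvalsh(H)[0]
E0_first_order = 0.5 + 3 * lam / 4
print(E0_exact, E0_first_order)   # agree to O(lam**2)
```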
Example of first and second order perturbation theory – quantum pendulum
Consider the quantum mathematical pendulum with the Hamiltonian (writing I for the moment of inertia, a convention we assume here)
$$H = -\frac{\hbar^2}{2I}\frac{d^2}{d\phi^2} - \lambda \cos\phi,$$
with the potential energy $-\lambda\cos\phi$ taken as the perturbation, i.e.
$$V = -\cos\phi.$$
The unperturbed normalized quantum wave functions are those of the rigid rotor and are given by
$$\psi_q(\phi) = \frac{e^{iq\phi}}{\sqrt{2\pi}}, \qquad q = 0, \pm 1, \pm 2, \ldots,$$
and the energies are
$$E_q^{(0)} = \frac{\hbar^2 q^2}{2I}.$$
The first-order energy correction to the rotor due to the potential energy vanishes, since the average of $\cos\phi$ over a full period is zero:
$$E_q^{(1)} = -\lambda\, \langle \psi_q | \cos\phi | \psi_q \rangle = 0.$$
Using the formula for the second-order correction, and noting that $\langle \psi_{q'} | \cos\phi | \psi_q \rangle = \tfrac{1}{2}\delta_{q',\, q\pm 1}$, one gets
$$E_q^{(2)} = \frac{I\lambda^2}{\hbar^2 (4q^2 - 1)}$$
(for the non-degenerate level q = 0 this is immediate; for |q| ≥ 1 the degenerate pair ±q must in principle be treated with degenerate theory, but the coupling between them vanishes at this order).
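A numerical cross-check of the second-order formula (an illustrative sketch; conventions ħ = I = 1 assumed, so for the ground state q = 0 the prediction is E0 ≈ −λ²) using a truncated Fourier basis:

```python
import numpy as np

M, lam = 40, 0.01                  # Fourier modes m = -M..M; weak coupling
m = np.arange(-M, M + 1)

H = np.diag(m**2 / 2.0)
# <m'|cos(phi)|m> = 1/2 for |m' - m| = 1, so V = -lam*cos(phi) couples neighbours.
idx = np.arange(2 * M)
H[idx, idx + 1] -= lam / 2
H[idx + 1, idx] -= lam / 2

E0_exact = np.linalg.eigvalsh(H)[0]
E0_second_order = -lam**2          # lam**2 / (4*q**2 - 1) at q = 0
print(E0_exact, E0_second_order)   # agree to O(lam**4)
```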
References
1. ^ Aoyama, Tatsumi; Hayakawa, Masashi; Kinoshita, Toichiro; Nio, Makiko (2012). "Tenth-order QED lepton anomalous magnetic moment: Eighth-order vertices containing a second-order vacuum polarization". Physical Review D. American Physical Society. 85 (3): 033007. arXiv:1110.2826. Bibcode:2012PhRvD..85c3007A. doi:10.1103/PhysRevD.85.033007.
2. ^ Schrödinger, E. (1926). "Quantisierung als Eigenwertproblem" [Quantification of the eigen value problem]. Annalen der Physik (in German). 80 (13): 437–490. Bibcode:1926AnP...385..437S. doi:10.1002/andp.19263851302.
3. ^ Rayleigh, J. W. S. (1894). Theory of Sound. I (2nd ed.). London: Macmillan. pp. 115–118. ISBN 1-152-06023-6.
5. ^ Landau, L. D.; Lifschitz, E. M. Quantum Mechanics: Non-relativistic Theory (3rd ed.). ISBN 0-08-019012-X.
6. ^ Bir, Gennadiĭ Levikovich; Pikus, Grigoriĭ Ezekielevich (1974). "Chapter 15: Perturbation theory for the degenerate case". Symmetry and Strain-induced Effects in Semiconductors. ISBN 978-0-470-07321-6.
7. ^ Soliverez, Carlos E. (1981). "General Theory of Effective Hamiltonians". Physical Review A. 24: 4–9. Bibcode:1981PhRvA..24....4S. doi:10.1103/PhysRevA.24.4.
8. ^ Frasca, M. (1998). "Duality in Perturbation Theory and the Quantum Adiabatic Approximation". Physical Review A. 58 (5): 3439. arXiv:hep-th/9801069. Bibcode:1998PhRvA..58.3439F. doi:10.1103/PhysRevA.58.3439.
9. ^ Mostafazadeh, A. (1997). "Quantum adiabatic approximation and the geometric phase". Physical Review A. 55 (3): 1653. arXiv:hep-th/9606053. Bibcode:1997PhRvA..55.1653M. doi:10.1103/PhysRevA.55.1653.
10. ^ Frasca, Marco (2007). "A strongly perturbed quantum system is a semiclassical system". Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences. 463 (2085): 2195. arXiv:hep-th/0603182. Bibcode:2007RSPSA.463.2195F. doi:10.1098/rspa.2007.1879.
ee6db1525fc504dc |
Journal of Function Spaces and Applications
Volume 2013 (2013), Article ID 968603, 13 pages
Research Article
Energy Scattering for Schrödinger Equation with Exponential Nonlinearity in Two Dimensions
School of Mathematical Sciences, Peking University, Beijing 100871, China
Received 9 January 2013; Accepted 24 February 2013
Academic Editor: Baoxiang Wang
When the spatial dimension is two and the initial data and the Hamiltonian satisfy the conditions stated below, we prove that the scattering operator is well defined in the whole energy space for the nonlinear Schrödinger equation with exponential nonlinearity.
1. Introduction
We consider the Cauchy problem for the following nonlinear Schrödinger equation: in two spatial dimensions with initial data and . Solutions of the above problem satisfy the conservation of mass and Hamiltonian: where
Nakamura and Ozawa [1] showed the existence and uniqueness of the scattering operator of (1) with (2). Then, Wang [2] proved the smoothness of this scattering operator. However, both of these results are based on the assumption of small initial data. In this paper, we remove this assumption and show that for arbitrary finite-energy initial data the scattering operator is always well defined.
Wang et al. [3] proved the energy scattering theory of (1) with , where and the spatial dimension . Ibrahim et al. [4] showed the existence and asymptotic completeness of the wave operators for (1) with when the spatial dimensions , , and . Under the same assumptions as [4], Colliander et al. [5] proved the global well-posedness of (1) with (2).
Theorem 1. Assume that , , and . Then problem (1) with (2) has a unique global solution in the class .
Remark 2. In fact, by the proof in [5], the global well-posedness of (1) with (2) is also true for .
In this paper, we further study the scattering for this problem. Nakanishi [6] proved the existence of the scattering operators in the whole energy space for (1) with a power nonlinearity. Then, Killip et al. [7] and Dodson [8] proved the existence of the scattering operators for the mass-critical problem. Inspired by these two works, we use the concentration compactness method, which was introduced by Kenig and Merle in [9], to prove the existence of the scattering operators for (1) with (2).
For convenience, we write (1) and (2) together; that is, where and . Our main result is as follows.
Theorem 3. Assume that the initial data , , and . Let be a global solution of (5). Then
In Section 2, Lemma 9 will show us that Theorem 3 implies the following scattering result.
Theorem 4. Assume that the initial data , , and . Then the solution of (5) is scattering in the energy space .
We will prove Theorem 3 by contradiction in Section 5. In Section 2, we give some nonlinear estimates. In Section 3, we prove the stability of solutions. In Section 4, we give a new profile decomposition for bounded sequences, which will be used to prove concentration compactness.
Now, we introduce some notations:
We define
For Banach space , , or , we denote
When , is abbreviated to . When or is infinity or when the domain is replaced by , we make the usual modifications. Specially, we denote
For , we split , where
For any two Banach spaces and , . denotes positive constant. If depends upon some parameters, such as , we will indicate this with .
Remark 5. Note that in Theorem 3, we only need to prove the result for , . Hence, we always suppose this in what follows.
Moreover, we always suppose that the initial data of (5) satisfies and .
2. Nonlinear Estimates
In order to estimate (2), we need the following Trudinger-type inequality.
Lemma 6 (see [10]). Let . Then for all satisfying , one has
Note that for for all ,
By Lemma 6 and Hölder inequality, for and for all , we have and thus
Lemma 7 (Strichartz estimates). For or , (the pairs were called admissible pairs) we have
Lemma 8 (see [3, Proposition 2.3]). Let be fixed indices. Then for any ,
As shown in [6, 11], to obtain the scattering result, it suffices to show that any finite energy solution has a finite global space-time norm. In fact, if Theorem 3 is true, we have the following theorem.
Lemma 9 (Theorem 3 implies Theorem 4). Let be a global solution of (5), , and . Then, for all admissible pairs, we have
Moreover, there exist such that
Proof. Defining , , by Strichartz estimates, (14) and (15),
Following the approach of Bourgain [12], one can split into finitely many pairwise disjoint intervals:
By (21),
Since and can be chosen arbitrarily small, by interpolation, for all admissible pairs and . The desired result (19) follows.
By (19) and (21),
Thus, the limits are well defined and belong to . Since , we must have , and (20) is proved.
3. Stability
Lemma 10 (stability). For any and , there exists with the following property: suppose that satisfies for all , and approximately solves (5) in the sense that
Then for any initial data satisfying and , there is a unique global solution to (5) satisfying .
Proof. Denote , then and . Let . By estimates similar to (21), we have
Then we subdivide the time interval into finite subintervals , , such that for each . Let be small such that
Then by (31) on , we have and
Using the same analysis as above, we can get . Iterating this for , we obtain ; the desired result follows.
4. Linear Profile Decomposition
In this section, we will give the linear profile decomposition for Schrödinger equation in . First, we give some definitions and lemmas.
Definition 11 (symmetry group, [13]). For any phase , position , frequency , and scaling parameter , we define the unitary transformation by the formula
We let be the collection of such transformations; this is a group with identity , inverse , and group law
If is a function, we define , where by the formula or equivalently
If , we can easily prove that and .
Definition 12 (enlarged group, [13]). For any phase , position , frequency , scaling parameter , and time , we define the unitary transformation by the formula or in other words
Let be the collection of such transformations. We also let act on global space-time function by defining or equivalently
Lemma 13 (linear profiles for sequence, [14]). Let be a bounded sequence in . Then (after passing to a subsequence if necessary) there exists a family , of functions in and group elements for such that one has the decomposition for all ; here, is such that its linear evolution has asymptotically vanishing scattering size:
Moreover, for any ,
Furthermore, for any , one has the mass decoupling property For any , we have
Remark 14. If the orthogonal condition (45) holds, then (see [14])
Moreover, if , then (see [14, 15]), for any , If , then (see [16, Lemma 5.5])
Remark 15. As each linear profile in Lemma 13 is constructed in the sense that weakly in (see [14]), after passing to a subsequence in , rearrangement, translation, and refining accordingly, we may assume that the parameters satisfy the following properties:(i) as , or for all ;(ii) or as , or for all ;(iii) as , or with ;(iv)when , and , we can let .
Our main result in this section is the following lemma.
Lemma 16 (linear profiles for sequence). Let be a bounded sequence in . Then up to a subsequence, for any , there exists a sequence in and a sequence of group elements such that
Here, for each , and must satisfy is such that
Moreover, for any , one has the same orthogonal conditions as (45). For any , one has the following decoupling properties:
Proof. Let
Then, we have
By Lemma 13, after passing to a subsequence if necessary, we can obtain with the stated properties (i)–(iv) in Remark 15 and (43)–(47). Denote
Step 1. We prove that with and for each fixed , where
By (44) and , (64) holds obviously. For (62), we prove it by induction. For every , suppose that
Case 1. If , we have .
In fact, by (66),
Using (47),
By direct calculation,
Let . When , When , When , When ,
By (68)–(74), and thus .
Case 2. If , we can prove
By absorbing the error into , we can suppose . Since for each fixed , we must have .
Now, we begin to prove (75). Let be the characteristic function of the set and , and then where
Note that We have
When , we have . Choosing , then by (79), , the desired result follows.
When and , we have
When and , we denote and . The line (when , we use the line instead) separates the frequency space into two half-planes. We let be the half-plane which contains the point , and then
By (79), we have . Note that (75) holds.
When and , let be the half-plane which does NOT contain the point ; we can prove (75) similarly as above.
By the proof above, we get and . Denote and suppose
Repeating the proof above, we can get , , and ; by induction, we obtain (62).
By the orthogonal condition (45), following the proof in [14], we can obtain the claim for fixed and for all ; (63) is proved.
Step 2. For arbitrary , we define if the orthogonal condition (45) is NOT true for any subsequence; that is,
By the definition above, if , we have
Note that, by Remark 15, we can put these two profiles together as one profile. Then, by denoting / as , we can obtain the sequence , ; and (52)–(56) are proved.
Specially, since for each , we have for fixed and , and hence for any fixed and .
Step 3. We prove (57) now. By (56), we only need to prove that for all , ,
As and for , By (54), we have
We separate the set into two subsets:
When ,
Hence, in order to prove (90), one only needs to prove
If and , for a function , we have
By approximating by in and sending , we have . Note that ; we obtain for all .
If and , we have orthogonal condition for any . Thus, |
5342b2090dd9f59a |
I am to teach section 18 of "Elementary Number Theory" (Dudley) - Sums of Two Squares - to an undergraduate Number Theory class, and am having trouble cultivating anything other than a rote dissection of the lemmas/theorems presented in the text.
The professor copies (exclusively) from the text onto the chalkboard during lectures, but I would like to present the students with something a little more interesting and that they cannot find in their text.
What are the connections of the "Sums of Two Squares" to other fields of mathematics? Why would anyone care about solving $n = x^2 + y^2$ in the integers?
I am aware of the norm of the Gaussian integers, and will probably mention something about how the identity $$(a^2 + b^2)(c^2 + d^2) = (ac -bd)^2 + (ad + bc)^2$$ is deeper than just the verification process of multiplying it out (e.g. I might introduce $\mathbb{Z}[i] $ and mention that "the norm is multiplicative").
What else is there? The book mentions (but only in passing) sums of three and four squares, Waring's Problem, and Goldbach's Conjecture.
Also, I have seen Akhil's answer and the Fermat Christmas question, but these don't admit answers to my question.
The solutions to $x^2+y^2=n$ describe all the points in $\mathbb{Z}^2$ which belong to the same circle with the center at $(0,0)$. You can also use them to find the intersection between $\mathbb{Z}^2$ and a circle with the centre at some lattice point.... – N. S. Mar 27 '12 at 16:22
@N.S. - I'd vote on that as an answer. – The Chaz 2.0 Mar 27 '12 at 16:24
A theorem says every nonnegative integer is the sum of four squares of nonnegative integers. It is also true that every nonnegative integer is the sum of three triangular numbers, of five pentagonal numbers, of six hexagonal numbers, etc. Maybe that has no relevance to other areas of mathematics, but if you're wondering why you would care about sums of squares, maybe the fact that it's part of this larger pattern matters. – Michael Hardy Mar 27 '12 at 16:24
It sounds from your question like the main problem here is the professor copying directly from the text onto the chalkboard. What a waste of student's time. Does the instructor explain to colleagues why his/her latest research is interesting by reading directly from the paper? Zzzzzz.... – KCd Mar 28 '12 at 1:37
Asking which integers are sums of two squares is a quintessential theme from number theory. Do the students already find the course interesting at all?? Look at the number of solutions x,y for each n and see how erratically that count behaves as n increases step by step. Some regularity appears if we think about it at primes first. This illustrates the difference between the linear ordering way of thinking about integers in many other areas of math vs. the divisibility relation among integers that is central to number theory. – KCd Mar 28 '12 at 1:44
Consider the Laplacian $\Delta = \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2}$ acting on nice functions $f : \mathbb{R}^2 \to \mathbb{C}$ which are doubly periodic in the sense that $f(x, y) = f(x+1, y) = f(x, y+1)$. There is a nice set of eigenvectors one can write down given by $$f_{a,b}(x, y) = e^{2 \pi i (ax + by)}, a, b \in \mathbb{Z}$$
with eigenvalues $-4 \pi^2 (a^2 + b^2)$, and these turn out to be all eigenvectors, so it is possible to expand a suitable class of such functions in terms of linear combinations of the above.
Eigenvectors of the Laplacian are important because they can be used to construct solutions to the wave equation, the heat equation, and the Schrödinger equation. I'll restrict myself to talking about the wave equation: in that context, eigenvectors of the Laplacian give standing waves, and the corresponding eigenvalue tells you what the frequency of the standing wave is. So eigenvalues of the Laplacian on a space tell you about the "acoustics" of a space (here the torus $\mathbb{R}^2/\mathbb{Z}^2$). For more details, see the Wikipedia article on hearing the shape of a drum. A more general keyword here is spectral geometry.
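To see the erratic multiplicity pattern concretely, here is a tiny brute-force count (an illustrative sketch I am adding, not part of the original answer) of $r_2(n)$, the number of integer pairs $(a,b)$ with $a^2+b^2=n$, i.e. the degeneracy of the torus eigenvalue $-4\pi^2 n$:

```python
from collections import Counter

N = 50  # large enough to make the counts exact for n <= N**2
r2 = Counter(a*a + b*b for a in range(-N, N + 1) for b in range(-N, N + 1))
print([r2[n] for n in range(13)])
# [1, 4, 4, 0, 4, 8, 0, 0, 4, 4, 8, 0, 0] -- the erratic behaviour noted above
```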
By considering different periodicity conditions you can also motivate studying solutions to $n = ax^2 + bxy + cy^2$ for more general $a, b, c$, and working in higher dimensions you can motivate studying more general positive-definite quadratic forms. There is an interesting general question you can ask here about whether you can "hear the shape of a torus" (the answer turns out to be yes in two dimensions if you interpret "shape" suitably and no in general). – Qiaochu Yuan Mar 27 '12 at 16:35
Of course you don't need me to tell you this, but this is a perfect example of what I am looking for. – The Chaz 2.0 Mar 27 '12 at 16:36
@The Chaz: if you liked this then you might want to pick up a copy of Schroeder's Number theory in science and communication and look in particular at section 7.10. – Qiaochu Yuan Mar 27 '12 at 16:39
Thanks again for this answer. I left it "un-accepted" for a while to encourage more answers. – The Chaz 2.0 May 1 '12 at 2:49
At a much more elementary level, one might want to draw connections to what they already know.
For example, there is a very nice connection between the identity $(a^2 + b^2)(c^2 + d^2) = (ac -bd)^2 + (ad + bc)^2$ and the addition laws for cosine and sine.
As another example, suppose that $a$ and $b$ are positive, and we want to maximize $ax+by$ subject to $x^2+y^2=r^2$. Using $(a^2+b^2)(x^2+y^2)=(ax+by)^2+(ay-bx)^2$, we can see that the maximum of $ax+by$ is reached when $ay-bx=0$.
Then there is the generalization (Brahmagupta identity). Connection with Fibonacci numbers. Everything is connected to everything else!
Could you point me in the direction of the trig identities you had in mind? Also, is there a sign error? I might just be projecting my tendency to make such errors (cf revisions of this question!) – The Chaz 2.0 Mar 27 '12 at 18:28
I was just thinking of $a=\cos x$, $b=\sin x$, $c=\cos y$, $d=\sin y$. That gives the right signs. For the max problem, I probably switched signs, doesn't matter, one can switch signs without changing correctness of identity. – André Nicolas Mar 27 '12 at 19:08
Got it. Maybe the changed signs only matter in the context of the norm in $\mathbb{Z} [i]$... – The Chaz 2.0 Mar 28 '12 at 1:56
In another direction, counting the solutions of $n=ax^2+bxy+cy^2$ for quadratic forms with negative discriminant is often the starting place for a course on Algebraic Number Theory. I believe Gauss was one of the first people to think about this area. This leads to the definition of the class number, and we can prove things like Dirichlet's class number formula.
$x^2+y^2$ is one of the simplest examples to start with.
3609e362fffca07d | Tech skills: Is it getting harder to keep up?
Professional skills and experience are hard won from education, training and time in the industry. But it's amazing how many people get by despite a fundamental lack of knowledge.
Written in Philadelphia and despatched to TechRepublic at 30Mbps over an open wi-fi hub in my Pittsburgh hotel later the same day.
Almost everything I learned at college and university has been used at some time during my professional career - and I still lean on that seminal education when faced with new and challenging problems today. But by degrees, the speed of change has seen many technologies and techniques sidelined by progress during my years in industry.
To probe the rate of change I recently asked an engineering class for a show of hands on a series of topics to get a feel for the knowledge evaporation rate. Out of a class of about 100 mature students the count went like this:
• Who has seen a thermionic tube? = 5
• Does anyone know how they work? = 0
• Does anyone know how a cathode ray tube works? = 1
• Does anyone know how a transistor works? = 1
• Who knows how a laser works? = 3
• Who knows how an LED works? = 2
• Who knows how an LED display works? = 3
• Who understands Maxwell's equations? = 0
• Who knows how an antenna works? = 0
• Has anyone heard of the radar range equation? = 0
• Who has heard of the Schrödinger equation? = 11
• Who knows what a compiler is? = 15
I won't go on as I'm sure you get the idea. The big question is: does this lack of fundamental knowledge matter? Perhaps not. So long as someone somewhere does understand, the tech world will keep on spinning. But should the last one with knowledge die, we could quickly be in trouble.
For many of us, keeping abreast or ahead of the game is now an accelerating challenge driven by technologies that span every sector and aspect of companies and society.
We can no longer read all the R&D publications or attend all the conferences and courses to get a filtered and distilled view of progress.
Putting all these issues into some quantified context using the best practice I have come across involves the concept of knowledge half-life. The calculation methods are varied and hardly comprehensive, or indeed fully justified, but it is all we have as a guide to the challenge we now face.
The simplest technique is to reference the citation rates of scientific, technology and engineering publications. On this basis, I have put together the following graphic for a broad selection of disciplines.
Image: Peter Cochrane/TechRepublic
The most interesting observation to make here is that the medical and marine biology students are out of date before they can even graduate, while the physicists have about 11 years of grace.
What does this tell us? The education system as it stands is no longer doing its job, and it can't possibly work as we move forward and the situation worsens.
It is obvious that we have to move on, and it all has to be online and available anywhere, anytime, and in a form fit for purpose. But perhaps the biggest leap will be delegating the role of tutor to some disembodied entity - some machine - able to rapidly access, filter and format what is required well, or just, in time.
In addition, individuals will have to assume a greater responsibility for their own course of study. They will have to choose what they follow as wholly prescriptive education paths fall by the wayside. Moreover, education will be full time from cradle to grave for those in the fastest-moving sectors.
In many respects the world is demanding that students grow up and mature far faster than ever before. They will have to assume greater responsibility and achieve greater authority earlier than any previous generation, and they have to do it in concert with a world of machines and escalating complexity.
Will our young people be able to rise to this change and challenge? We'll soon see, but I certainly intend running with them and giving it a go.
cd3ae9e44f6083a1 | Is it possible to calculate the energy of an electron in any orbital of any atom within the Schrödinger wave model? If so, how? E.g. the energy of the $3s^2$ electron of the $\ce{Na^-}$ ion.
It is surely possible. Note, though, that for many-electron systems the orbital picture is an approximate description, but it is the approximation on which the whole of general chemistry is based, so that is fine from that perspective. The approximation is known as the Hartree-Fock (HF) model, and thus, to get the energies of electrons occupying some orbitals, all you need to do is to solve the Schrödinger equation in this approximation.
As a quick exercise one can do HF calculation in, say, Gaussian, with the following input file:
#P HF/aug-cc-pVTZ Pop=Full
-1 1
Na 0.00000000 0.00000000 0.00000000
to get this picture of occupied orbitals:
(figure: energy-level diagram of the occupied HF orbitals of $\ce{Na^-}$)
The HOMO with the energy of $-0.01288 \, \mathrm{Hartree} = -0.3505 \, \mathrm{eV}$ is basically the $\mathrm{3s}$ orbital you're looking for (trust me). Note, however, that this energy is approximate since the basis set (aug-cc-pVTZ) is finite. We could do better than that, but that is a different story.
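For readers without a Gaussian license, here is a minimal sketch of the same calculation in PySCF (my assumption: the same method and basis as the input above; the HOMO energy should come out close to the value just quoted):

```python
from pyscf import gto, scf

mol = gto.M(atom='Na 0 0 0', charge=-1, spin=0, basis='aug-cc-pvtz')
mf = scf.RHF(mol)
mf.kernel()

n_occ = mol.nelectron // 2
print(mf.mo_energy[:n_occ])      # occupied orbital energies (Hartree)
print(mf.mo_energy[n_occ - 1])   # HOMO, the 3s-like level discussed above
```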
In response to permeakra (since he does not trust his colleagues), I quote an authoritative reference in the field which explicitly talks about individual electrons occupying individual spin-orbitals and about orbital energies. As a reference I choose the famous book entitled "Modern Quantum Chemistry" written by Attila Szabo and Neil S. Ostlund.
Quote #1 (Szabo & Ostlund, p. 50)
This Slater determinant has $N$ electrons occupying $N$ spin orbitals $(\chi_i, \chi_j, \dotsc, \chi_k)$ without specifying which electron is in which orbital.
Quote #2 (Szabo & Ostlund, p. 54)
$$ f(i) \chi(x_i) = \varepsilon \chi(x_i) \tag{2.52} $$ [...]
The solution of the Hartree-Fock eigenvalue problem (2.52) yields a set $\{\chi_k\}$ of orthonormal Hartree-Fock spin orbitals with orbital energies $\{\varepsilon_k\}$.
Below I also quote what Szabo & Ostlund have to say about Koopmans' theorem, because I already know that won't trust me.
Quote #3 (Szabo & Ostlund, p. 110)
The first theorem [Koopmans' theorem] constitutes an interpretation of the Hartree-Fock orbital energies as ionization potentials and electron affinities.
See, orbitals with their energies & occupancies do exist in the Hartree-Fock theory. It is the interpretation of orbital energies as ionization potentials which requires an additional theorem.
• $\begingroup$ > The approximation is known as the Hartree-Fock (HF) model || The HF model does NOT give energies of individual electrons, or even orbitals. The HF wavefunction is antisymmetric under electron position exchange, so there is no way to say if some particular electron occupies some orbital, only that an 'orbital' is occupied or not. You are probably referring to Koopmans' theorem. It is not about the energy of a particular electron, but about the energy required to build a system with a 'hole' relative to the original one. $\endgroup$ – permeakra Jun 28 '15 at 17:48
• $\begingroup$ @permeakra, no, now I'm pretty sure, you're confused. Speaking of which particular electron occupies a particular orbital is meaningless, but speaking about the energy of an electron at particular orbital is perfectly sensible. Because an electron that occupies some orbital does have a well defined energy, no matter which particular electron it is. $\endgroup$ – Wildcat Jun 28 '15 at 18:01
• $\begingroup$ @permeakra, the fact that electrons are indistinguishable does not forbid us to speak about individual electrons. For instance, in the HF approximation we talk about individual electrons occupying spin-orbitals, and that is perfectly legal (approximate, but legal). What can not be done is some sort of identification that a particular electron (say, twelfth one) occupies a particular orbital (say, $\mathrm{3s}$). But I surely can say that an electron occupies this orbital and that it will have a particular energy. $\endgroup$ – Wildcat Jun 28 '15 at 18:10
• $\begingroup$ > For instance, in the HF approximation we talk about individual electrons occupying spin-orbitals || Nope, HF approximation has nothing to say about individual electrons or orbital energy, it never appears in the formalism. There is so known 'Koopman's theorem', which does not hold true and is about ionization potential, most likely it is the energy printed by provided input. Again, it is not about individual electrons or even electron on the orbital. Read a good QC book. $\endgroup$ – permeakra Jun 29 '15 at 3:26
• 4
$\begingroup$ It would be great if you'd be a little bit nicer to each other. Every argument can be lead on a non-personal basis. Some readers might interpret this as a hostile argument. $\endgroup$ – Martin - マーチン Jun 29 '15 at 11:57
No. In QM electrons are indistinguishable.
Still, it is possible to associate a somewhat characteristic value, using various spectroscopic methods, i.e. virtually 'moving' some electron from one orbital to another (or removing it entirely) and calculating the corresponding energy. This energy, however, is imprecise, as the other electrons 'feel' the move of the 'moved' one. Still, something is better than nothing, so people use what they can.
• $\begingroup$ How does indistinguishability of electrons cause inability to calculate their energies? $\endgroup$ – Wildcat Jun 28 '15 at 11:56
• 1
$\begingroup$ @Wildcat since electrons are indistinguishable, one cannot attribute energy to some particular electron, since any exchange of two electrons gives rise to an antisymmetric wavefunction with exactly the same energy, that is, from the observer's POV indistinguishable from the original (and QM does not have internal variables). If you prefer a stricter description, the QM formalism gives only the total energy of the system. It is possible, using some approximations, to assign energy to some orbitals, but not to individual electrons. $\endgroup$ – permeakra Jun 28 '15 at 17:44
• $\begingroup$ What you're saying sounds wrong for me. Look, even in the HF approximation, despite what you're saying, electrons are indistinguishable. That's why our trial HF wave function is antisymmetric product of spin-orbitals and not a simple product of them. Yes, I obviously couldn't say which particular electron occupies which orbital, but I know what is the energy of an electron is at each and every orbital. $\endgroup$ – Wildcat Jun 28 '15 at 17:56
• 1
$\begingroup$ Besides, systems of many non-interacting particles are considered in QM, and for them the many-particle wave function perfectly separates into the product of one-particle wave functions without any approximations. Yes, such systems are rather ideal, but QM in principle doesn't seem to mind that we speak about each and every particle as being in its own one-particle quantum state. There is nothing wrong with that. Of course, for interacting particles such description is approximate, but that's another story. $\endgroup$ – Wildcat Jun 28 '15 at 18:17
• 3
$\begingroup$ The "energy of an electron in an orbital" is equivalent to the energy of an orbital" so electron indistinguishability is irrelevant. Orbitals are distinguishable up to symmetry. $\endgroup$ – Jeff Jun 28 '15 at 18:26
User permeakra's observation is right, but the inference is not. User Wildcat's answer is spot-on, but misses one minor point, which I will explain at the end.
This is just a summary of Wildcat's explanation: The idea is that we have a system comprising indistinguishable fermions, which is why we need to use the antisymmetrized wavefunction. The LCAO-MO approximation builds MOs as linear combinations of AOs, which the electrons are then allowed to occupy, as described by the electron densities. Variationally, the ground state would lead to all the lowest levels being occupied, resulting in the HF ground-state wavefunction. The HOMO, which is the $3s$ orbital, would be at a certain energy level which is definitely well defined for a given level of theory.
So, to answer your question: yes, you can calculate the energy of the $3s$ orbital of $\ce{Na^{-}}$. What is not possible is to calculate the energy of "the $3s^{2}$ electron", because the electrons occupying the $3s$ (or any other) level are indistinguishable. Another thing to note is that when you use a wavefunction to describe electrons, they are highly delocalized and can be spread over more than one level at once. Think of the boron atom: which orbital does the $2p$ electron occupy, $2p_x$, $2p_y$ or $2p_z$? They are symmetry-equivalent, so you cannot point at one particular orbital; this becomes a multiconfigurational problem, which is beyond the scope of this discussion. The implication is that the electrons occupying these orbitals sit at specified energy levels, depending on which orbital they occupy.
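For completeness, here is a minimal computational sketch of the point above. It assumes the PySCF package (my choice; it is not mentioned in the question) and an illustrative basis set — any HF code would do. The array `mo_energy` holds the canonical orbital ("one-electron") energies discussed above:

```python
# Minimal sketch with PySCF (an assumption; any HF program would do).
from pyscf import gto, scf

# Na-: 11 protons, 12 electrons, closed shell (3s^2).
# The diffuse basis is an illustrative choice for an anion.
mol = gto.M(atom='Na 0 0 0', basis='aug-cc-pvdz', charge=-1, spin=0)

mf = scf.RHF(mol)
mf.kernel()  # runs the SCF procedure and prints the total energy

# Canonical orbital ("one-electron") energies, in hartree.
# For Na- the HOMO is the 3s orbital: 1s, 2s, 2p(x3), 3s -> index 5.
homo_index = mol.nelectron // 2 - 1
print('3s orbital energy:', mf.mo_energy[homo_index], 'hartree')
```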
• $\begingroup$ No, I disagree. I do not miss anything. I know & perfectly understand what you're talking about in your answer and I have never said anything different. I agree that OP used a bit strange language ("the $\mathrm{3s^2}$ electron"), but that his problem and not mine. $\endgroup$ – Wildcat Jul 2 '15 at 6:25
• $\begingroup$ So, I repeat my reasoning here point by point, and then you tell me what I'm missing. 1) We do have orbitals in the HF approximation. 2) HF orbitals are occupied by electrons. 3) I couldn't specify which particular electron occupies which orbital, but I can tell what the energy of an electron on any given orbital is. 4) This energy, usually referred to as the orbital energy, is also known as the one-electron energy, which clearly identifies its meaning. 5) Case closed. $\endgroup$ – Wildcat Jul 2 '15 at 6:28
• $\begingroup$ Specifically, for the answer in question, what I say is that I could not tell which particular electron out of the total of 6 occupies the $\mathrm{3s_{\beta}}$ spin-orbital, but I can tell what the energy of this electron is. And since chemistry is ultimately based on the RHF picture, this energy will be the same as the energy of the $\mathrm{3s_{\alpha}}$ electron; thus, I simply say "the energy of the $\mathrm{3s}$ electron": there are two of them, but they have exactly the same energy in the restricted picture. $\endgroup$ – Wildcat Jul 2 '15 at 6:32
• $\begingroup$ The meaning of the phrase "the energy of the $\mathrm{3s_{\alpha}}$ electron" is perfectly well defined: it is the energy of an electron which occupies the $\mathrm{3s_{\alpha}}$ spin-orbital. One more time: I couldn't specify which electron it is, but whichever electron occupies the $\mathrm{3s_{\alpha}}$ spin-orbital, I surely can tell its energy. $\endgroup$ – Wildcat Jul 2 '15 at 6:35
• $\begingroup$ And another short remark, since I notice you talked about "the lowest levels being occupied": conceptually there are no virtual orbitals in the HF approximation. Initially you have as many spin-orbitals and as many HF equations as there are electrons in your system. In the RHF case, where all spatial orbitals are doubly occupied, you have as many spatial orbitals and as many RHF equations rewritten in terms of them as there are pairs of electrons in your system. $\endgroup$ – Wildcat Jul 2 '15 at 6:45
Quantum mechanics/Quantum field theory on a violin string
And I am not sure this calculation will lead to pedagogically useful results. The goal is to study quantum field theory in a "simple" system. As you will see, the math is far from simple.--Guy vandegrift (discusscontribs) 09:39, 8 September 2016 (UTC)
This construction of an elementary quantum field theory will also give readers a glimpse of Fourier series expansions, Hamiltonian mechanics, and also Black-body radiation. It assumes that the reader is familiar with the solution to Schrödinger equation for a quantum harmonic oscillator.
The classical theory of transverse waves on a string
The first six modes (n= 1, 2,...,6) of a taut string clamped at both ends.
We begin with the classical theory of transverse waves on a vibrating string with length $L$, mass $M$, and tension $T$. The dispersion relation, $\omega=\omega(k)$, relates frequency to wavelength:
$$\omega = vk,$$
where $\omega$ is the angular frequency and $k$ is the wavenumber. The boundary conditions at each end of the string of length $L$ imply that the wavenumber can take on only those values that cause the length to equal an integral number of half wavelengths: $L = n\lambda_n/2$, where $n$ may be taken to be a positive integer (1, 2, 3, ...). Thus we have:
$$k_n = \frac{2\pi}{\lambda_n} = \frac{n\pi}{L}.$$
The speed of transverse waves is
$$v = \sqrt{\frac{T}{\mu}},$$
where $\mu = M/L$ is the linear mass density.
Fourier series and wave energy
In our classical wave, the transverse displacement obeys
$$y(x,t) = \sqrt{2}\,\sum_{n=1}^{\infty} A_n(t)\,\sin(k_n x).$$
While it is not customary to include the factor $\sqrt{2}$ in this Fourier series, the insertion of this factor redefines the coefficients in a way that will prove convenient for establishing that this system is equivalent to an infinite collection of simple harmonic oscillators.
The quantum mechanical version of a classical theory begins with some canonical version of the theory. We shall adopt the convention that $\dot A_n$ denotes $dA_n/dt$. The total kinetic energy of the wave is
$$KE = \frac{1}{2}\int_0^L \mu\,\dot y^2\,dx = \mu\sum_{n}\sum_{m}\dot A_n \dot A_m\int_0^L \sin(k_n x)\sin(k_m x)\,dx.$$
The double sum contains terms when the two indices (m,n) are equal and terms where they are not equal.
The integrals over the product of the two sine waves have simple properties because the interval of length L contains exactly an integral number of half-wavelengths (i.e., n and m are both integers). Therefore,
$$\int_0^L \sin(k_n x)\sin(k_m x)\,dx = \frac{L}{2}\,\delta_{nm}.$$
Comment on inner product, orthogonal functions, and these integrals
This result is easy to remember if one notes that the average of $\sin^2$ over $n$ half wavelengths equals 1/2 if $n$ is an integer, and that the integral of a constant over a segment equals the constant times the length of that segment:
$$\int_0^L \sin^2(k_n x)\,dx = \frac{L}{2}.$$
This is one of many examples in physics where a class of functions (here sine functions) obeys this orthogonality condition:
$$\langle f_n | f_m \rangle = \int f_n(x)\,f_m(x)\,dx = 0 \quad (n \neq m).$$
If this identity holds, $f_n$ and $f_m$ are said to be orthogonal functions, because $\langle f_n | f_m \rangle$ is known as the inner product of the functions $f_n$ and $f_m$ (if $f_n$ and $f_m$ are real).[1] Whenever such a collection of orthogonal functions is defined, the range of the integral must be specified (here it is from $0$ to $L$).
With this substitution we have for the kinetic energy of a vibrating string:
$$KE = \sum_{n=1}^{\infty} \frac{1}{2} M \dot A_n^2.$$
If the factor of $\sqrt{2}$ had not been inserted earlier, we would have redefined our amplitudes so that the wave's kinetic energy would take this intuitive form.
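The orthogonality relation used in this substitution is easy to verify numerically; a minimal Python sketch:

```python
import numpy as np
from scipy.integrate import quad

L = 1.0  # string length (illustrative value)

def inner(n, m):
    # <sin(k_n x) | sin(k_m x)> over [0, L], with k_n = n*pi/L
    f = lambda x: np.sin(n * np.pi * x / L) * np.sin(m * np.pi * x / L)
    val, _ = quad(f, 0.0, L)
    return val

print(inner(2, 2))  # -> L/2 = 0.5
print(inner(2, 3))  # -> 0 (orthogonal)
```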
Potential energy in a wave
The work required to stretch a string of length $L$ to a length $L + \Delta L$ is $T\,\Delta L$, where $T$ is the tension in the string. This work acts as a potential energy. A transverse wave with displacement $y(x)$ has a length given by
$$\ell = \int_0^L \sqrt{1 + \left(\frac{dy}{dx}\right)^2}\,dx \approx L + \frac{1}{2}\int_0^L \left(\frac{dy}{dx}\right)^2 dx,$$
where we have used the approximation for small ε: $(1+\varepsilon)^p \approx 1 + p\varepsilon$.
If $y=y(x,t)$ represents a wave, it is customary to replace the derivative by a partial derivative: $dy/dx \to \partial y/\partial x$. Moreover, it is convenient to express the partial derivative in terms of the wavenumber described above:
$$\frac{\partial y}{\partial x} = \sqrt{2}\,\sum_{n} A_n(t)\,k_n \cos(k_n x),$$
where we note that $k_n = n\pi/L$ is the wavenumber of the n-th mode. Using the Fourier series expansion described above, the potential energy of the wave is
$$PE = \frac{T}{2}\int_0^L \left(\frac{\partial y}{\partial x}\right)^2 dx = T \sum_n \sum_m A_n A_m k_n k_m \int_0^L \cos(k_n x)\cos(k_m x)\,dx.$$
As occurred previously with the kinetic energy, this double sum becomes a single sum over all cases where m = n, because the cosine functions are also orthogonal functions over this range of integration (provided n and m are integers). As before, each integral equals $L/2$ because the average value of $\cos^2$ is 1/2 whenever the cosine is averaged over an integral number of half-wavelengths. Hence
$$PE = \sum_{n=1}^{\infty} \frac{1}{2}\kappa_n A_n^2,$$
where the spring constant associated with the nth mode is
$$\kappa_n = T L k_n^2 = M\omega_n^2.$$
It is known from the classical wave equation for a stretched string that each mode oscillates as
$$A_n(t) = A_n(0)\cos(\omega_n t + \varphi_n),$$
where $\omega_n = \sqrt{\kappa_n/M} = n\,\omega_1$, and $\omega_1$ is the frequency of the lowest order standing wave in the classical vibrating string.
Quantizing the harmonic oscillators
From the known behavior of the classical violin string, we obtain equations of motion which, if cast in canonical form, will tell us how to create the quantum mechanical version of the theory. Our canonical form shall be that of Hamiltonian mechanics. Our goal is to show that the classical vibrating string is identical to an (almost?) infinite number of independent simple harmonic oscillators:
$$H = \sum_{n=1}^{\infty}\left(\frac{p_n^2}{2M} + \frac{1}{2}\kappa_n A_n^2\right),$$
where $p_n = M\dot A_n$ is the conjugate momentum. The wave equation for the simple harmonic oscillator is well known. The variables $(A_n, p_n)$ play the same role as $(x, p)$ in the quantum mechanics of a single particle. Schrödinger's equation is:
$$i\hbar\,\frac{\partial \Psi}{\partial t} = \sum_{n}\left(-\frac{\hbar^2}{2M}\frac{\partial^2}{\partial A_n^2} + \frac{1}{2}\kappa_n A_n^2\right)\Psi.$$
The solution is of the form $\Psi = \prod_n \psi_{j_n}(A_n)\,e^{-iE_{j_n}t/\hbar}$, where
$\psi_{j_n}(A_n)$ is the energy eigenstate for the oscillator in the $j_n$-th energy level of the potential associated with a spring constant equal to $\kappa_n$.
If you really need to see these wavefunctions, here they are:
The energy eigenstates are:[2]
$$\psi_j(A) = \frac{1}{\sqrt{2^j\,j!}}\left(\frac{M\omega}{\pi\hbar}\right)^{1/4} e^{-M\omega A^2/2\hbar}\, H_j\!\left(\sqrt{\frac{M\omega}{\hbar}}\,A\right), \qquad E_j = \hbar\omega\left(j+\tfrac{1}{2}\right).$$
The Hermite polynomials are
$$H_j(x) = (-1)^j\, e^{x^2}\frac{d^j}{dx^j}\, e^{-x^2}.$$
A graph of the first six (physicists') Hermite polynomials $H_n(x)$:
From Wikipedia, the first eleven physicists' Hermite polynomials are:
$$H_0(x) = 1$$
$$H_1(x) = 2x$$
$$H_2(x) = 4x^2 - 2$$
$$H_3(x) = 8x^3 - 12x$$
$$H_4(x) = 16x^4 - 48x^2 + 12$$
$$H_5(x) = 32x^5 - 160x^3 + 120x$$
$$H_6(x) = 64x^6 - 480x^4 + 720x^2 - 120$$
$$H_7(x) = 128x^7 - 1344x^5 + 3360x^3 - 1680x$$
$$H_8(x) = 256x^8 - 3584x^6 + 13440x^4 - 13440x^2 + 1680$$
$$H_9(x) = 512x^9 - 9216x^7 + 48384x^5 - 80640x^3 + 30240x$$
$$H_{10}(x) = 1024x^{10} - 23040x^8 + 161280x^6 - 403200x^4 + 302400x^2 - 30240$$
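These polynomials can be generated mechanically from the standard recurrence $H_{j+1}(x) = 2xH_j(x) - 2jH_{j-1}(x)$; a minimal Python sketch (pure NumPy; the coefficient layout is my own choice):

```python
import numpy as np

def hermite(n):
    """Coefficients of the physicists' Hermite polynomial H_n,
    highest power first, via H_{k+1} = 2x H_k - 2k H_{k-1}."""
    h_prev, h = np.array([1.0]), np.array([2.0, 0.0])  # H_0, H_1
    if n == 0:
        return h_prev
    for k in range(1, n):
        h_next = 2.0 * np.append(h, 0.0)   # 2x * H_k (shift powers up)
        h_next[2:] -= 2.0 * k * h_prev     # - 2k * H_{k-1}
        h_prev, h = h, h_next
    return h

print(hermite(3))  # -> [  8.   0. -12.   0.], i.e. 8x^3 - 12x
```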
1. If the functions are complex, take the complex conjugate of the second function; this ensures that <f|f> is a positive real number.
2. https://en.wikipedia.org/w/index.php?title=Quantum_harmonic_oscillator&oldid=627959280 |
Thursday, January 23, 2014
Tegmark on book tour
I criticized Max Tegmark's new book, and I attended his book tour lecture in Santa Cruz.
He mainly tried to impress the audience that the history of science had two big trends: finding the universe to be bigger than expected, and finding it to be more mathematical than expected. He is taking these trends to the logical conclusion, and hypothesizing that the universe includes all imaginable possibilities, and that they are all purely mathematical.
I thought that I had an understanding of what he meant by "mathematical". But now I do not think that he has a coherent idea himself. A student asked: if the universe is reducible to math, then is math reducible to axioms, set theory, homotopy type theory, or what? He evaded the question and did not answer it.
In response to another question, he said that he likes infinity, and mathematicians and physicists use infinity all the time, but he does not believe in it. He not only does not believe in infinite cardinals, he does not believe that the real numbers are infinitely divisible. At least not the real numbers that match up to his mathematical universe. By avoiding infinity, he says he also avoids Goedel paradoxes. (Update: See Tegmark's clarification in the comments below.)
This makes very little sense. The Goedel paradoxes occur with just finite proofs about finite natural numbers. I guess he can assume that the universe is some finite discrete automaton with only finitely many measurement values possible, but then the universe is not truly described by differential equations. All of his arguments for the universe being mathematical were based on differential equations.
Tegmark also spent a lot of time arguing that the govt should spend a lot more money trying to reduce risk of future disasters, such as funding the Union of Concerned Scientists or monitoring stray asteroids. He complained that Justin Bieber is more famous than some Russian technician who helped avert war during the Cuban missile crisis.
The trouble with this argument is that his math multiverse philosophy requires him to believe that time, randomness, probability, risk, human caring, emotion, and free will are all illusions. What seems like a choice is really determined. We might appear to be lucky when an asteroid misses the Earth, but a parallel asteroid hits a parallel Earth in a parallel universe, and someone with the same thoughts and feelings as you gets killed. The difference between you and the parallel guy who gets killed is just another illusion.
I asked him about this afterwards, and he claimed that I should care about the outcome of this universe for the same reasons that I put my clothes on in the morning. The woman next to me suggested that I read Sartre, if I wanted to blindly contemplate my own existence. No thanks, he was a Marxist kook.
I also listened to Tegmark's FQXi podcast on his new paper, Consciousness as a State of Matter, in addition to the solid, liquid, and gas states.
On another blog, Tegmark gave this experimental evidence for his ideas:
a) Observations of the cosmic microwave background by the Planck satellite etc. have made some scientists take cosmological inflation more seriously, and inflation in turn generically predicts (according to the work of Vilenkin, Linde and others) a Level I multiverse.
b) Steven Weinberg’s use of the Level II multiverse to predict dark energy with roughly the correct density before it was observed and awarded the Nobel prize has made some scientists take Level II more seriously.
c) Experimental demonstration that the collapse-free Schrödinger equation applies to ever larger quantum systems appears to have made some scientists take the Level III multiverse more seriously.
Is it really completely obvious that these people are all deluded and that none of these three developments have any bearing on your question?
I can believe that there is matter outside of our observable universe (light cone), and that maybe we will get indirect evidence for it, even though we cannot see it. Call it another universe if you want. But beyond that, these multiverse arguments are silly. Weinberg's argument was merely an argument about how different dark energy densities could affect galaxy formation. It says nothing about any multiverse. (Lee Smolin gives another argument.) And those quantum experiments give no evidence against the Copenhagen interpretation, or you would hear about it.
Being a mathematician, my prejudices are toward a Pythagorean view that math explains everything. But Tegmark seems completely misguided to me. He has put himself out there before the public promoting these ideas as legitimate science, but I do not see it as either good math or good physics.
Update: Woit posted some sharper criticism:
The “Mathematical Universe Hypothesis” and Level IV multiverse of Tegmark’s book is not “controversial”. As far as I can tell, no serious scientist other than him thinks these are non-empty ideas. There is a controversy over the string theory landscape, but none here. These ideas are also not “radical”, they are content-free.
That is wishful thinking. The various multiverse ideas, such as many worlds, are increasingly popular. The only serious criticism of Tegmark, as far as I know, is my 2012 FQXi essay.
1. Roger,
Something odd is happening with your text editor or spell checker. I see your first link has "boo" instead of "book" in the link, and the word "pub" appears a couple times where I think you meant to say "put", those are just the words I caught.
1. Thanks. I guess I cared to put my clothes on, but not to check my spelling!
2. Dear Roger: I'm glad I got to meet you in person! Here's just a first quick reply from the Seattle-Boston flight before it takes off. I'm certainly *not* saying that I have problems with integers, real numbers etc. in *mathematics*. I said that I feel we have no evidence for anything truly infinite or truly continuous in *physics* - I think you'll agree that this is a very different statement! /Max
LOG#115. Bohr's legacy (III).
Dedicated to Niels Bohr
and his atomic model
3rd part:
From gravatoms to dark matter
We will take the values of the following fundamental constants:
From (3), we obtain
Comparing (5) with (6), we deduce that
and thus
and then
and so the spectrum of this gravatom is given by
Gravatoms and Dark Matter: a missing link
May the Bohr model and gravatoms be with you!
LOG#114. Bohr’s legacy (II).
Dedicated to Niels Bohr
and his atomic model
2nd part: Electron shells,
Quantum Mechanics
and The Periodic Table
Niels Bohr (1923) was the first to propose that the periodicity in the properties of the chemical elements might be explained by the electronic structure of the atom. In fact, his early proposals were based on his own "toy model" (the Bohr atom) for the hydrogen atom, in which the electron shells were orbits at a fixed distance from the nucleus. Bohr's original configurations would seem strange to a present-day chemist: the sulfur atom was given a shell structure of (2,4,4,6) instead of 1s^22s^22p^63s^23p^4, the right structure being (2,8,6).
The following year, E. C. Stoner incorporated Sommerfeld's corrections into the electron configuration rules, thereby incorporating the third quantum number into the description of electron shells, and this correctly predicted the shell structure of sulfur to be the now celebrated (2,8,6). However, neither Bohr's system nor Stoner's could correctly describe the changes in atomic spectra in a magnetic field (known as the Zeeman effect). We had to wait for the complete formalism of quantum mechanics to arise in order to describe this atomic phenomenon and many others (like the Stark effect, the splitting of spectra due to an electric field).
Bohr was well aware of all this. Indeed, he had written to his friend Wolfgang Pauli to ask for his help in saving quantum theory (the system now known as the "old quantum theory"). Pauli realized that the Zeeman effect could be due only to the outermost electrons of the atom, and was able to reproduce Stoner's shell structure, but with the correct structure of subshells, by his inclusion of a fourth quantum number and his famous exclusion principle (for fermions like the electrons themselves) around 1925.
The next step was the Schrödinger equation. First published by E. Schrödinger in 1926, it gave three of the four quantum numbers as a direct consequence of its solution for the hydrogen atom: its solution yields the (quantum mechanical) atomic orbitals which are shown today in textbooks of chemistry (and above). The careful study of atomic spectra allowed the electron configurations of atoms to be determined experimentally, and led to an empirical rule, known as Madelung's rule (1936), for the order in which atomic orbitals are filled with electrons. Madelung's rule is generally summarized in a formal sketch (picture):
Shells and subshells versus orbitals
In the picture of the atom given by quantum mechanics, the notion of trajectory loses its meaning. The description of electrons in atoms is given by "orbitals". Instead of orbits, orbitals arise as the zones where the probability of finding an electron is maximal. The classical world seems to vanish into the quantum realm. However, the electron configuration was first conceived under the Bohr model of the (hydrogen) atom, and it is still common to speak of shells and subshells (imagine an onion!!!) despite the advances in understanding of the quantum-mechanical nature of electrons (both wave and particle, due to the de Broglie hypothesis). Any particle (e.g. an electron) has both wave and particle features. The de Broglie hypothesis says that to any particle with linear momentum p=mv there corresponds a wavelength (the de Broglie wavelength) given by
\lambda = \dfrac{h}{mv}
Remark: this formula can be easily generalized to the relativistic domain by a simple shift from the classical momentum to the relativistic momentum P=m\gamma v, so
\lambda = \dfrac{h\sqrt{1-\beta^2}}{mv}, with \beta = v/c
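As a numerical sketch of this formula for an electron (Python; the constants are hard-coded CODATA-style values, and the sample speeds are arbitrary):

```python
import math

h = 6.62607015e-34    # Planck constant, J*s
m = 9.1093837015e-31  # electron mass, kg
c = 2.99792458e8      # speed of light, m/s

def de_broglie(v):
    """Relativistic de Broglie wavelength: lambda = h*sqrt(1-beta^2)/(m*v)."""
    beta = v / c
    return h * math.sqrt(1.0 - beta**2) / (m * v)

print(de_broglie(0.01 * c))  # slow electron: ~2.4e-10 m (atomic scale)
print(de_broglie(0.99 * c))  # relativistic electron: much shorter
```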
An electron shell is the set of energetically allowed states that electrons may occupy which share the same principal quantum number n (the number before the letter in the orbital label), which sets the energy of the shell (or of the orbital, in the language of QM). An atom's nth electron shell can accommodate 2n^2 electrons, e.g. the first shell can accommodate 2 electrons, the second shell 8 electrons, the third shell 18 electrons, the fourth 32, the fifth 50, the sixth 72, the seventh 98, the eighth 128, the ninth 162, the tenth 200, the eleventh 242, the twelfth 288, and so on. This sequence of numbers, 2n^2 = 2, 8, 18, 32, 50, 72, 98, ..., is well known.
In fact, I have to be more precise with the term "magic number". A magic number (in atomic or even nuclear physics) is, in the shell models of both atomic and nuclear structure, any of a series of numbers that connote a stable structure.
The magic numbers for atoms are 2, 10, 18, 36, 54, 86, 118, 168, 218, 290, 362, ... They correspond to the total number of electrons in filled electron shells (having ns^2np^6 as the outer electron configuration). Electrons within a shell have very similar energies and are at similar distances from the nucleus; these closed-shell atoms are the inert gases!
The factor of two above arises because the allowed states are doubled due to the electron spin: each atomic orbital admits up to two otherwise identical electrons with opposite spin, one with spin +1/2 (usually denoted by an up-arrow) and one with spin −1/2 (a down-arrow).
An atomic subshell is the set of states defined by a common secondary quantum number, also called the azimuthal quantum number, ℓ, within a shell. The values ℓ = 0, 1, 2, 3 correspond to the spectroscopic labels s, p, d, and f, respectively. The maximum number of electrons which can be placed in a subshell is given by 2(2ℓ + 1). This gives two electrons in an s subshell, six electrons in a p subshell, ten electrons in a d subshell and fourteen electrons in an f subshell. Therefore, shells "close" after the total number of electrons reaches 2, 10, 18, 36, 54, 86, ...; that is, atomic shells close after we reach ns^2np^6, with n>1, i.e., shells close after reaching an inert gas electron configuration.
The numbers of electrons that can occupy each shell and each subshell arise from the equations of quantum mechanics, in particular the Pauli exclusion principle: no two electrons in the same atom can have the same values of the four quantum numbers stated above. The energy associated with an electron is that of its orbital. The energy of any electron configuration is often approximated as the sum of the energies of the individual electrons, neglecting the electron-electron interactions. The configuration that corresponds to the lowest electronic energy is called the ground (a.k.a. fundamental) state.
Aufbau principle and Madelung rule
The Aufbau principle (from the German word Aufbau, "building up, construction") was an important part of Bohr's original concept of electron configuration. It may be stated as: a maximum of two electrons are put into orbitals in the order of increasing orbital energy, i.e., the lowest-energy orbitals are filled before electrons are placed in higher-energy orbitals.
The approximate order of filling of atomic orbitals follows the arrows in the sketch given above, from 1s to 7p. After 7p the order includes orbitals outside the range of the diagram, starting with 8s.
The principle works very well (for the ground states of the atoms) for the first 18 elements, then decreasingly well for the following 100 elements. The modern form of the Aufbau principle describes an order of orbital energies given by Madelung's rule (also referred to as Klechkowski's rule). This rule was first stated by Charles Janet in 1929, rediscovered by E. Madelung in 1936, and later given a theoretical justification by V. M. Klechkowski. In modern words, it states that:
A) Orbitals are filled in the order of increasing n+l.
B) Where two orbitals have the same value of n+l, they are filled in order of increasing n.
This gives the following order for filling the orbitals:
1s, 2s, 2p, 3s, 3p, 4s, 3d, 4p, 5s, 4d, 5p, 6s, 4f, 5d, 6p, 7s, 5f, 6d, 7p, (8s, 5g, 6f, 7d, 8p, ...)
In this list the orbitals in parentheses are not occupied in the ground state of the heaviest atom now known (circa 2013, July), ununoctium (Uuo), an atom with Z=118 protons in its nucleus and thus 118 electrons in its ground state.
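The filling order above can be generated mechanically from the two rules; a minimal Python sketch (the helper names are mine):

```python
# Generate the Madelung (n+l) filling order up to n_max shells.
LETTERS = 'spdfghik'  # spectroscopic labels for l = 0, 1, 2, ...

def madelung_order(n_max):
    orbitals = [(n, l) for n in range(1, n_max + 1) for l in range(n)]
    # Rule A: increasing n+l; Rule B: for equal n+l, increasing n.
    orbitals.sort(key=lambda nl: (nl[0] + nl[1], nl[0]))
    return [f"{n}{LETTERS[l]}" for n, l in orbitals]

print(madelung_order(8))
# -> ['1s', '2s', '2p', '3s', '3p', '4s', '3d', '4p', '5s', '4d', ...]
```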
The Aufbau principle can be applied, in a modified form, to the protons and neutrons in the atomic nucleus, as in the nuclear shell model. The nuclear shell model predicts magic numbers at Z,N = 2, 8, 20, 28, 50, 82, 126 (and Z,N = 184 and 258 for spherical symmetry, but that does not seem to be the case for "deformed" nuclei at high values of Z and N).
Shortcomings of the Aufbau principle
The Aufbau principle rests on a fundamental postulate that the order of orbital energies is fixed, both for a given element and between different elements; neither assumption is true (although both are approximately true enough for the principle to be useful). It considers atomic orbitals as "boxes" of fixed energy into which can be placed two electrons and no more. However, the energy of an electron "in" an atomic orbital depends on the energies of all the other electrons of the atom (or ion, or molecule, etc.). There are no "one-electron solutions" for systems of more than one electron, only a set of many-electron solutions which cannot be calculated exactly. A reminder that the Aufbau principle is based on an approximation is the fact that there is an almost-fixed filling order at all: within a given shell, the s-orbital is always filled before the p-orbitals. In a hydrogenic (hydrogen-like) atom, which only has one electron, the s-orbital and the p-orbitals of the same shell have exactly the same energy, to a very good approximation in the absence of external electromagnetic fields. (However, in a real hydrogen atom, the energy levels are slightly split by the magnetic field of the nucleus, and by quantum electrodynamic effects like the Lamb shift.)
Exceptions to Madelung’s rule
There are several more exceptions to Madelung's rule among the heavier elements, and it is more and more difficult to resort to simple explanations such as the stability of half-filled subshells. It is possible to predict most of the exceptions by Hartree–Fock calculations, which are an approximate method for taking account of the effect of the other electrons on orbital energies. For the heavier elements, it is also necessary to take account of the effects of special relativity on the energies of the atomic orbitals, as the inner-shell electrons are moving at speeds approaching the speed of light. In general, these relativistic effects tend to decrease the energy of the s-orbitals in relation to the other atomic orbitals. The electron-shell configuration of elements beyond rutherfordium (Z=104) has not yet been empirically verified, but they are expected to follow Madelung's rule without exceptions until the element Ubn (unbinillium, Z=120). Beyond that number, there is no accepted viewpoint (see below my discussion of Pykko's model for the extended periodic table).
From the Greeks to Mendeleiev and Seaborg
The notion of atoms and elements has undergone a historical evolution from the Greeks to Mendeleiev. In this section, I am going to give you a visual tour from the "ancient elements" to their current classifications via periodic tables (Mendeleiev's being the first one!).
Some early elements and periodic tables: [images: the ancient elements of the Greeks, Chinese elements, Mendeleiev as Zeus on a periodic table monument, the elements known to the first humans, and the elements known circa 1800]
Just for fun, the Feng Shui elements are… [image: Feng Shui elements in Chinese metaphysics]
And you can also find today apps/games with elements as "key" pieces… Gamelogy! LOL… [image: elements and gamelogy]
Turning back to Chemistry… or Alchemy (modern chemistry is an evolution of alchemy in which we take the scientific method seriously, don't forget it!) [images: elements in astrology, the five elements]
After the chemical revolution in the 18th and 19th centuries, we also have these pictures (note the evolution of the chemical elements, their geometry and classification): [images: Dalton's table (1808), Lavoisier's list, old elements, a 3D table, old symbols and elements, Newlands' octave-law table (1865), Bayley's periodic table, Meyer's periodic table, atomic masses circa 1850, old element notations, Mendeleiev's conjecture in German, Mendeleiev's vertical periodic table, Mendeleiev's predictions and their context, Rang's periodic table, metalloids versus metals, and periodic features of the chemical elements]
Some interesting pictures about "new tables" and geometries of some periodic tables and their "make-up" process: [images: a spiral periodic table, Schaltenbrand's periodic table, a Mayan periodic table]
The following ones are just for fun (XD): [images: a periodic table of geek TV series and movies, a fun 3D periodic table model, an elliptical periodic table, a variation including superactinides, a periodic table cylinder, a sphere periodic table, an infinite periodic table, a periodic table with electron shells, other periodic table geometries, a periodic table arch, Stowe's periodic table, Lavoisier's complete list]
Extended periodic tables
and the island of stability
Seaborg conjectured that the 8th period elements were an interesting "laboratory" in which to test quantum mechanical principles and physical principles from relativity and quantum physics. He claimed that it could be possible that, around some (high) values of Z and N (122 or 126 in Z, and about 184 in N), some superheavy elements could be stable enough to be produced. This topic is still controversial, for the same reasons I mentioned in the previous post: the finite size of the nucleus, relativistic effects that make the nuclei deformed, and, likely, some novel effects related to nonperturbative issues (like pair creation in strong fields, as Greiner et al. have remarked) should be taken into account. Anyway, the existence of the so-called island of stability is a hot topic in both theoretical and experimental chemistry (at the level of the synthesis of superheavy elements). It is also relevant for (quantum and relativistic) physics. However, we will have to wait to be able to find those elements in laboratories or even in outer space!
Some extended periodic tables were proposed by theoretical chemists like Seaborg and many others: [images: the island of stability (Seaborg's hypothesis), a galactic periodic table, a circular extended table, an extended periodic table with Seaborg's stability island, extended tables with an extra block]
Pykko’s model and beyond
The Finnish chemist Pekka Pykko has produced a beautiful modern extended periodic table from his numerical calculations. He has discovered that Madelung's rule is modified, and that the likely correct periodic table including the superheavy elements (with Z less than or equal to 172) should look something like this: [images: Pykko's extended periodic table]
You can visit P. Pykko homepage’s here http://www.chem.helsinki.fi/~pyykko/I urge to do it. He has really cool materials! The abstract of his periodic table paper deserves to be inserted here:
[image: abstract of Pykko's paper] Some of his interesting results from it are the modified electron configurations with respect to the normal Madelung rule (as I remarked above): [images: corrected electron configurations up to E140, E149, and E168]
Indeed, Pykko is able to calculate some “simple” and “stable” molecules made of superheavy elements!
It is interesting to compare Pykko's table with other extended periodic tables out there, like this one:
His extended periodic table paper can be downloaded here,
and you can also watch a periodic table video about it, by the most famous chemist on YouTube, here.
We have already met the feynmanium in the last post, but what is its electron configuration? It is not clear, since we have at most theoretical predictions: no atoms of E137 have been produced yet. Thus, feynmanium's electron configuration is assumed to be \left[Ms\right] 5g^{17}8s^2, but due to the smearing of the orbitals caused by their small energy separation, the electron configuration is believed to be \left[Ms\right] 5g^{11}6f^{3}7d^18s^28p^2. The Hyperphysics web page also discusses this problem. It says:
"(…) Dirac showed that there are no stable electron orbits for more than 137 electrons; therefore the last chemical element on the periodic table will be untriseptium (137Uts), also known informally as feynmanium _{137}Fy. Its full electron configuration would be something like …
1s2 2s2 2p6 3s2 3p6 4s2 3d10 4p6 5s2 4d10 5p6 6s2 4f14 5d10 6p6 7s2 5f14 6d10 7p6 8s2 5g17
or is it …
1s2 2s2 2p6 3s2 3p6 4s2 3d10 4p6 5s2 4d10 5p6 6s2 4f14 5d10 6p6 7s2 5f14 6d10 7p6 8s1 5g18 ?(…)”
What is the right electron configuration? Without a synthesized element, we do not know…
Even more, you can have fun with this page and references therein http://planetstar.wikia.com/wiki/Feynmanium
There, you can even find that there are proposals for almost every superheavy element (SHE) name! Let me remark that today, circa July 10th, 2013, we have named every chemical element up to Z=112 (copernicium), plus Z=114 (flerovium) and Z=116 (livermorium) "officially". Feynmanium, neutronium, and any other superheavy element name is not official. The IUPAC recommends using a systematic name until the discoverers have proposed a name and it has been "officially" accepted. Thus, feynmanium should be called untriseptium until we can produce it!
More Periodic Table limits? What about a 0th element with Z=0? Sometimes it is called “neutronium” or “neutrium”. More details here
Of course it is a speculative idea or concept. Indeed, in Japanese culture, the void is the 5th element! It is closer to the picture we get from particle physics today, in which "elementary particles" are excitations of some vacuum for a certain (spinorial, scalar, tensor, …) field. We could see the "voidium" (no, it is not the dalekenium! LOL) as the fundamental "element" for particle physics. And yet, only about 5% of the known Universe is "radiation" and "known elements". What a shock!
[images: the known elements and their weight in our current cosmological models, quintessence elements and cosmic destiny, a final comparison of basic elements past and present]
Just for fun, again, the anime Saint Seiya Omega uses 7 fundamental "elements" (yes, I am a geek, I recognize it!) [image: Saint Seiya Omega elements]
The Seaborg’s original proposal was something like the next table:PTextra2+Superactinides
And you see, it is quite different from the astrological first elements from myths and superstitions: [images: Feng Shui elements, elements and geometrical forms, elements and spirit] And finally, let me show you the presently known elementary particles again, the smallest "elements" from which matter is believed to be made (till now, of course):
[image: the Standard Model, 2012] Remark: Chemistry is about atoms. High Energy Physics is about elementary particles.
Final questions:
1st. What is your favorite (theoretical or known to exist) chemical element?
2nd. What is your favorite elementary particle (theoretical or known to exist in the Standard Model)?
May The Chemical Elements and the Elementary Particles be with YOU!
LOG#113. Bohr’s legacy (I).
Dedicated to Niels Bohr
and his atomic model
1st part: A centenary model
I hope you will enjoy the next (short) thread…
Atomic mysteries
Dalton’s atoms or Dalton atomic model was very simple.
1st. Atoms are mostly vacuum space.
3rd. Nuclei had positive charge, and electrons negative charge.
Bohr model for the hydrogen atom
Bohr model hypotheses/postulates:
In summary, we have:
From this quantization rule (2), we can easily get
Thus, we have
and where we have defined the Rydberg (constant) as
we can deduce that the Rydberg corresponds to a wavenumber
or a frequency
and a wavelength
Please, check it yourself! :D.
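If you want to "check it yourself" numerically, here is a minimal Python sketch (constants hard-coded CODATA-style values; R denotes the Rydberg constant expressed as a wavenumber):

```python
m_e  = 9.1093837015e-31   # electron mass, kg
e    = 1.602176634e-19    # elementary charge, C
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
h    = 6.62607015e-34     # Planck constant, J*s
c    = 2.99792458e8       # speed of light, m/s

# Rydberg constant as a wavenumber: R = m e^4 / (8 eps0^2 h^3 c)
R = m_e * e**4 / (8 * eps0**2 * h**3 * c)
print(f"R          = {R:.4e} 1/m")      # ~1.0974e7 1/m
print(f"frequency  = {R * c:.4e} Hz")   # ~3.29e15 Hz
print(f"wavelength = {1e9 / R:.2f} nm") # ~91.13 nm
```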
Hydrogenic atoms
(and positronium, muonium,…)
and inserting the quantized values of the orbit radius
so, for the Bohr atom (hydrogen)
The feynmanium
v_1=Z\alpha c
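A one-line numerical check (Python; the value of the fine-structure constant is hard-coded) of how close the Bohr orbital speed v_1 gets to c as Z approaches 137:

```python
alpha = 1 / 137.035999  # fine-structure constant (approximate)
for Z in (1, 26, 92, 137):
    print(f"Z = {Z:3d}: v_1/c = {Z * alpha:.4f}")
# Z = 137 gives v_1/c ~ 0.9997: the Bohr electron nearly reaches c.
```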
[Figure: the first levels of the hydrogen atom spectrum from the Dirac equation] Or equivalently (I add comments from the slides):
1) Is there an ultimate element?
2) Is there a theory of everything (TOE)?
3) Is there an ultimate chemical element?
4) Is there a single “ultimate” principle?
5) How many elements does the Periodic Table have?
6) Is the feynmanium the last element?
13) Will we find superheavy elements the next decade?
14) Will we find superheavy elements this century?
16) Did you like/enjoy this post?
19) What is your favourite chemical element?
A glass tube containing a glowing green electron beam
Experiments with a Crookes tube first demonstrated the particle nature of electrons. In this illustration, the profile of the cross-shaped target is projected against the tube face at right by a beam of electrons.[1]
The electron is a subatomic particle which has the symbol e− and a negative electric charge of 1 elementary charge. It has no known components or substructure. Therefore, the electron is generally thought to be an elementary particle.[2] An electron has a mass that is approximately 1/1836 that of the proton.[8] The intrinsic angular momentum (spin) of the electron is a half-integer value in units of ħ, which means that it is a fermion. The antiparticle of the electron is called the positron. The positron is identical to the electron except that it carries electrical and other charges of the opposite sign. When an electron collides with a positron, both particles may either scatter off each other or be totally annihilated, producing a pair (or more) of gamma ray photons. Electrons, which belong to the first generation of the lepton particle family,[9] participate in gravitational, electromagnetic and weak interactions.[10] Electrons, like all matter, have quantum mechanical properties of both particles and waves, so they can collide with other particles and be diffracted like light. However, this duality is best demonstrated in experiments with electrons, due to their tiny mass. Since an electron is a fermion, no two electrons can occupy the same quantum state, in accordance with the Pauli exclusion principle.[9]
The concept of an indivisible quantity of electric charge was theorized to explain the chemical properties of atoms, beginning in 1838 by British natural philosopher Richard Laming;[4] the name electron was introduced for this charge in 1894 by Irish physicist George Johnstone Stoney. The electron was identified as a particle in 1897 by J. J. Thomson and his team of British physicists.[6][11][12]
In many physical phenomena, such as electricity, magnetism, and thermal conductivity, electrons play an essential role. An electron in motion relative to an observer generates a magnetic field, and will be deflected by external magnetic fields. When an electron is accelerated, it can absorb or radiate energy in the form of photons. Electrons, together with atomic nuclei made of protons and neutrons, make up atoms. However, electrons contribute less than 0.06% to an atom’s total mass. The attractive Coulomb force between an electron and a proton causes electrons to be bound into atoms. The exchange or sharing of the electrons between two or more atoms is the main cause of chemical bonding.[13]
According to theory, most electrons in the universe were created in the big bang, but they may also be created through beta decay of radioactive isotopes and in high-energy collisions, for instance when cosmic rays enter the atmosphere. Electrons may be destroyed through annihilation with positrons, and may be absorbed during nucleosynthesis in stars. Laboratory instruments are capable of containing and observing individual electrons as well as electron plasma, whereas dedicated telescopes can detect electron plasma in outer space. Electrons have many applications, including welding, cathode ray tubes, electron microscopes, radiation therapy, lasers and particle accelerators.
The ancient Greeks noticed that amber attracted small objects when rubbed with fur. Apart from lightning, this phenomenon is humanity’s earliest recorded experience with electricity.[14] In his 1600 treatise De Magnete, the English scientist William Gilbert coined the New Latin term electricus, to refer to this property of attracting small objects after being rubbed.[15] Both electric and electricity are derived from the Latin ēlectrum (also the root of the alloy of the same name), which came from the Greek word ήλεκτρον (ēlektron) for amber.
In 1737 C. F. du Fay and Hawksbee independently discovered what they believed to be two kinds of frictional electricity; one generated from rubbing glass, the other from rubbing resin. From this, Du Fay theorized that electricity consists of two electrical fluids, “vitreous” and “resinous”, that are separated by friction and that neutralize each other when combined.[16] A decade later Benjamin Franklin proposed that electricity was not from different types of electrical fluid, but the same electrical fluid under different pressures. He gave them the modern charge nomenclature of positive and negative respectively.[17] Franklin thought that the charge carrier was positive.[18]
Between 1838 and 1851, British natural philosopher Richard Laming developed the idea that an atom is composed of a core of matter surrounded by subatomic particles that had unit electric charges.[3] Beginning in 1846, German physicist Wilhelm Weber theorized that electricity was composed of positively and negatively charged fluids, and their interaction was governed by the inverse square law. After studying the phenomenon of electrolysis in 1874, Irish physicist George Johnstone Stoney suggested that there existed a "single definite quantity of electricity", the charge of a monovalent ion. He was able to estimate the value of this elementary charge e by means of Faraday's laws of electrolysis.[19] However, Stoney believed these charges were permanently attached to atoms and could not be removed. In 1881, German physicist Hermann von Helmholtz argued that both positive and negative charges were divided into elementary parts, each of which "behaves like atoms of electricity".[4]
In 1894, Stoney coined the term electron to describe these elementary charges, saying, “… an estimate was made of the actual amount of this most remarkable fundamental unit of electricity, for which I have since ventured to suggest the name electron“.[20] The word electron is a combination of the word electric and the suffix on, with the latter now used to designate a subatomic particle, such as a proton or neutron.[21][22]
A round glass vacuum tube with a glowing circular beam inside
A beam of electrons deflected in a circle by a magnetic field[23]
The German physicist Johann Wilhelm Hittorf undertook the study of electrical conductivity in rarefied gases. In 1869, he discovered a glow emitted from the cathode that increased in size with decrease in gas pressure. In 1876, the German physicist Eugen Goldstein showed that the rays from this glow cast a shadow, and he dubbed the rays cathode rays.[24] During the 1870s, the English chemist and physicist Sir William Crookes developed the first cathode ray tube to have a high vacuum inside.[25] He then showed that the luminescence rays appearing within the tube carried energy and moved from the cathode to the anode. Furthermore, by applying a magnetic field, he was able to deflect the rays, thereby demonstrating that the beam behaved as though it were negatively charged.[26][27] In 1879, he proposed that these properties could be explained by what he termed ‘radiant matter’. He suggested that this was a fourth state of matter, consisting of negatively charged molecules that were being projected with high velocity from the cathode.[28]
In 1896, the British physicist J. J. Thomson, with his colleagues John S. Townsend and H. A. Wilson,[11] performed experiments indicating that cathode rays really were unique particles, rather than waves, atoms or molecules as was believed earlier.[6] Thomson made good estimates of both the charge e and the mass m, finding that cathode ray particles, which he called “corpuscles,” had perhaps one thousandth of the mass of the least massive ion known: hydrogen.[6][12] He showed that their charge to mass ratio, e/m, was independent of cathode material. He further showed that the negatively charged particles produced by radioactive materials, by heated materials and by illuminated materials were universal.[6][30] The name electron was again proposed for these particles by the Irish physicist George F. Fitzgerald, and the name has since gained universal acceptance.[26]
The electron’s charge was more carefully measured by the American physicist Robert Millikan in his oil-drop experiment of 1909, the results of which he published in 1911. This experiment used an electric field to prevent a charged droplet of oil from falling as a result of gravity. This device could measure the electric charge from as few as 1–150 ions with an error margin of less than 0.3%. Comparable experiments had been done earlier by Thomson’s team,[6] using clouds of charged water droplets generated by electrolysis,[11] and in 1911 by Abram Ioffe, who independently obtained the same result as Millikan using charged microparticles of metals, then published his results in 1913.[35] However, oil drops were more stable than water drops because of their slower evaporation rate, and thus more suited to precise experimentation over longer periods of time.[36]
Around the beginning of the twentieth century, it was found that under certain conditions a fast moving charged particle caused a condensation of supersaturated water vapor along its path. In 1911, Charles Wilson used this principle to devise his cloud chamber, allowing the tracks of charged particles, such as fast-moving electrons, to be photographed.[37]
Atomic theory
Chemical bonds between atoms were explained by Gilbert Newton Lewis, who in 1916 proposed that a covalent bond between two atoms is maintained by a pair of electrons shared between them.[40] Later, in 1923, Walter Heitler and Fritz London gave the full explanation of the electron-pair formation and chemical bonding in terms of quantum mechanics.[41] In 1919, the American chemist Irving Langmuir elaborated on the Lewis’ static model of the atom and suggested that all electrons were distributed in successive “concentric (nearly) spherical shells, all of equal thickness”.[42] The shells were, in turn, divided by him in a number of cells each containing one pair of electrons. With this model Langmuir was able to qualitatively explain the chemical properties of all elements in the periodic table,[41] which were known to largely repeat themselves according to the periodic law.[43]
Quantum mechanics
A symmetrical blue cloud that decreases in intensity from the center outward
In quantum mechanics, the behavior of an electron in an atom is described by an orbital, which is a probability distribution rather than an orbit. In the figure, the shading indicates the relative probability to “find” the electron, having the energy corresponding to the given quantum numbers, at that point.
The success of de Broglie’s prediction led to the publication, by Erwin Schrödinger in 1926, of the Schrödinger equation that successfully describes how electron waves propagated.[50] Rather than yielding a solution that determines the location of an electron over time, this wave equation can be used to predict the probability of finding an electron near a position. This approach was later called quantum mechanics, which provided an extremely close derivation to the energy states of an electron in a hydrogen atom.[51] Once spin and the interaction between multiple electrons were considered, quantum mechanics allowed the configuration of electrons in atoms with higher atomic numbers than hydrogen to be successfully predicted.[52]
In 1928, building on Wolfgang Pauli's work, Paul Dirac produced a model of the electron – the Dirac equation, consistent with relativity theory – by applying relativistic and symmetry considerations to the Hamiltonian formulation of the quantum mechanics of the electromagnetic field.[53] In order to resolve some problems within his relativistic equation, in 1930 Dirac developed a model of the vacuum as an infinite sea of particles having negative energy, which was dubbed the Dirac sea. This led him to predict the existence of a positron, the antimatter counterpart of the electron.[54] This particle was discovered in 1932 by Carl D. Anderson, who proposed calling standard electrons negatrons, and using electron as a generic term to describe both the positively and negatively charged variants. This usage of the term 'negatron' is still occasionally encountered today, and it may be shortened to 'negaton'.[55][56]
In 1947 Willis Lamb, working in collaboration with graduate student Robert Retherford, found that certain quantum states of the hydrogen atom, which should have the same energy, were shifted in relation to each other, the difference being the Lamb shift. About the same time, Polykarp Kusch, working with Henry M. Foley, discovered that the magnetic moment of the electron is slightly larger than predicted by Dirac's theory. This small difference was later called the anomalous magnetic dipole moment of the electron. To resolve these issues, a refined theory called quantum electrodynamics was developed by Sin-Itiro Tomonaga, Julian Schwinger and Richard P. Feynman in the late 1940s.[57]
Particle accelerators
With a beam energy of 1.5 GeV, the first high-energy particle collider was ADONE, which began operations in 1968.[60] This device accelerated electrons and positrons in opposite directions, effectively doubling the energy of their collision when compared to striking a static target with an electron.[61] The Large Electron-Positron Collider (LEP) at CERN, which was operational from 1989 to 2000, achieved collision energies of 209 GeV and made important measurements for the Standard Model of particle physics.[62][63]
Fundamental properties
The invariant mass of an electron is approximately 9.109×10−31 kilogram,[66] or 5.489×10−4 atomic mass unit. On the basis of Einstein‘s principle of mass–energy equivalence, this mass corresponds to a rest energy of 0.511 MeV. The ratio between the mass of a proton and that of an electron is about 1836.[8][67] Astronomical measurements show that the proton-to-electron mass ratio has held the same value for at least half the age of the universe, as is predicted by the Standard Model.[68]
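The quoted rest energy follows directly from E = mc²; a minimal Python sketch with hard-coded constants:

```python
m_e = 9.1093837015e-31   # electron mass, kg
c   = 2.99792458e8       # speed of light, m/s
eV  = 1.602176634e-19    # joules per electronvolt

E = m_e * c**2           # rest energy, joules
print(E / eV / 1e6)      # -> ~0.511 MeV, as quoted above
```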
Electrons have an electric charge of −1.602×10−19 coulomb,[66] which is used as a standard unit of charge for subatomic particles. Within the limits of experimental accuracy, the electron charge is identical to the charge of a proton, but with the opposite sign.[69] As the symbol e is used for the elementary charge, the electron is commonly symbolized by e−, where the minus sign indicates the negative charge. The positron is symbolized by e+ because it has the same properties as the electron but with a positive rather than negative charge.[66][65]
The electron has an intrinsic angular momentum or spin of 1/2.[66] This property is usually stated by referring to the electron as a spin-1/2 particle.[65] For such particles the spin magnitude is (√3/2)ħ,[note 3] while the result of the measurement of a projection of the spin on any axis can only be ±ħ/2. In addition to spin, the electron has an intrinsic magnetic moment along its spin axis.[66] It is approximately equal to one Bohr magneton,[70][note 4] which is a physical constant equal to 9.27400915(23)×10−24 joules per tesla.[66] The orientation of the spin with respect to the momentum of the electron defines the property of elementary particles known as helicity.[71]
The electron has no known substructure.[2][72] Hence, it is defined or assumed to be a point particle with a point charge and no spatial extent.[9] Observation of a single electron in a Penning trap shows the upper limit of the particle’s radius is 10−22 meters.[73] There is a physical constant called the “classical electron radius“, with the much larger value of 2.8179×10−15 m. However, the terminology comes from a simplistic calculation that ignores the effects of quantum mechanics; in reality, the so-called classical electron radius has little to do with the true fundamental structure of the electron.[74][note 5]
There are elementary particles that spontaneously decay into less massive particles. An example is the muon, which decays into an electron, a neutrino and an antineutrino, with a mean lifetime of 2.2×10−6 seconds. However, the electron is thought to be stable on theoretical grounds: the electron is the least massive particle with non-zero electric charge, so its decay would violate charge conservation.[75] The experimental lower bound for the electron’s mean lifetime is 4.6×1026 years, at a 90% confidence level.[76]
Quantum properties
Virtual particles
Physicists believe that empty space may be continually creating pairs of virtual particles, such as a positron and electron, which rapidly annihilate each other shortly thereafter.[78] The combination of the energy variation needed to create these particles, and the time during which they exist, falls under the threshold of detectability expressed by the Heisenberg uncertainty relation, ΔE · Δt ≥ ħ. In effect, the energy needed to create these virtual particles, ΔE, can be "borrowed" from the vacuum for a period of time, Δt, so that their product is no more than the reduced Planck constant, ħ ≈ 6.6×10−16 eV·s. Thus, for a virtual electron, Δt is at most 1.3×10−21 s.[79]
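The quoted bound Δt ≤ ħ/ΔE for a virtual electron can be reproduced directly; a minimal Python sketch with hard-coded constants:

```python
hbar_eVs = 6.582119569e-16   # reduced Planck constant, eV*s
m_e_eV   = 0.51099895e6      # electron rest energy, eV

delta_E = m_e_eV                # energy of a virtual electron, eV
print(hbar_eVs / delta_E)       # -> ~1.3e-21 s, as quoted above
```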
While an electron–positron virtual pair is in existence, the coulomb force from the ambient electric field surrounding an electron causes a created positron to be attracted to the original electron, while a created electron experiences a repulsion. This causes what is called vacuum polarization. In effect, the vacuum behaves like a medium having a dielectric permittivity more than unity. Thus the effective charge of an electron is actually smaller than its true value, and the charge decreases with increasing distance from the electron.[80][81] This polarization was confirmed experimentally in 1997 using the Japanese TRISTAN particle accelerator.[82] Virtual particles cause a comparable shielding effect for the mass of the electron.[83]
The interaction with virtual particles also explains the small (about 0.1%) deviation of the intrinsic magnetic moment of the electron from the Bohr magneton (the anomalous magnetic moment).[70][84] The extraordinarily precise agreement of this predicted difference with the experimentally determined value is viewed as one of the great achievements of quantum electrodynamics.[85]
In classical physics, the angular momentum and magnetic moment of an object depend upon its physical dimensions. Hence, the concept of a dimensionless electron possessing these properties might seem inconsistent. The apparent paradox can be explained by the formation of virtual photons in the electric field generated by the electron. These photons cause the electron to shift about in a jittery fashion (known as zitterbewegung),[86] which results in a net circular motion with precession. This motion produces both the spin and the magnetic moment of the electron.[9][87] In atoms, this creation of virtual photons explains the Lamb shift observed in spectral lines.[80]
An electron generates an electric field that exerts an attractive force on a particle with a positive charge, such as the proton, and a repulsive force on a particle with a negative charge. The strength of this force is determined by Coulomb’s inverse square law.[88] When an electron is in motion, it generates a magnetic field.[89] The Ampère-Maxwell law relates the magnetic field to the mass motion of electrons (the current) with respect to an observer. It is this property of induction which supplies the magnetic field that drives an electric motor.[90] The electromagnetic field of an arbitrary moving charged particle is expressed by the Liénard–Wiechert potentials, which are valid even when the particle’s speed is close to that of light (relativistic).
A graph with arcs showing the motion of charged particles
In quantum electrodynamics the electromagnetic interaction between particles is mediated by photons. An isolated electron that is not undergoing acceleration is unable to emit or absorb a real photon; doing so would violate conservation of energy and momentum. Instead, virtual photons can transfer momentum between two charged particles. It is this exchange of virtual photons that, for example, generates the Coulomb force.[94] Energy emission can occur when a moving electron is deflected by a charged particle, such as a proton. The acceleration of the electron results in the emission of Bremsstrahlung radiation.[95]
In the theory of electroweak interaction, the left-handed component of the electron's wavefunction forms a weak isospin doublet with the electron neutrino. This means that during weak interactions, electron neutrinos behave like electrons. Either member of this doublet can undergo a charged current interaction by emitting or absorbing a W and be converted into the other member. Charge is conserved during this reaction because the W boson also carries a charge, canceling out any net change during the transmutation. Charged current interactions are responsible for the phenomenon of beta decay in a radioactive atom. Both the electron and electron neutrino can undergo a neutral current interaction via a Z0 exchange.
Atoms and molecules
An electron can be bound to the nucleus of an atom by the attractive Coulomb force. A system of several electrons bound to a nucleus is called an atom. If the number of electrons is different from the nucleus' electrical charge, such an atom is called an ion. The wave-like behavior of a bound electron is described by a function called an atomic orbital. Each orbital has its own set of quantum numbers such as energy, angular momentum and projection of angular momentum, and only a discrete set of these orbitals exist around the nucleus. According to the Pauli exclusion principle each orbital can be occupied by up to two electrons, which must differ in their spin quantum number.
Electrons can transfer between different orbitals by the emission or absorption of photons with an energy that matches the difference in potential.[103] Other methods of orbital transfer include collisions with particles, such as electrons, and the Auger effect.[104] In order to escape the atom, the energy of the electron must be increased above its binding energy to the atom. This occurs, for example, with the photoelectric effect, where an incident photon exceeding the atom’s ionization energy is absorbed by the electron.[105]
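As a minimal sketch of the photoelectric condition described above (an added illustration, not from the original text): a photon can ionize an atom only if its energy hc/λ exceeds the ionization energy, which sets a longest usable wavelength. Using hydrogen's 13.6 eV ionization energy:

```python
# Longest photon wavelength that can ionize an atom of given ionization
# energy, from E = h*c/lambda. The 13.6 eV figure is for hydrogen.
HC_EV_NM = 1239.84  # h*c in eV*nm

def threshold_wavelength_nm(ionization_energy_ev: float) -> float:
    return HC_EV_NM / ionization_energy_ev

print(f"{threshold_wavelength_nm(13.6):.1f} nm")  # ~91.2 nm, far ultraviolet
```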
The chemical bond between atoms occurs as a result of electromagnetic interactions, as described by the laws of quantum mechanics.[107] The strongest bonds are formed by the sharing or transfer of electrons between atoms, allowing the formation of molecules.[13] Within a molecule, electrons move under the influence of several nuclei and occupy molecular orbitals, much as they can occupy atomic orbitals in isolated atoms.[108] A fundamental factor in these molecular structures is the existence of electron pairs. These are electrons with opposed spins, allowing them to occupy the same molecular orbital without violating the Pauli exclusion principle (much like in atoms). Different molecular orbitals have different spatial distributions of electron density. For instance, in bonded pairs (i.e. in the pairs that actually bind atoms together) electrons can be found with the maximal probability in a relatively small volume between the nuclei. By contrast, in non-bonded pairs electrons are distributed in a large volume around nuclei.[109]
A lightning discharge consists primarily of a flow of electrons.[110] The electric potential needed for lightning may be generated by a triboelectric effect.[111][112]
Independent electrons moving in vacuum are termed free electrons. Electrons in metals also behave as if they were free. In reality the particles that are commonly termed electrons in metals and other solids are quasi-electrons—quasi-particles, which have the same electrical charge, spin and magnetic moment as real electrons but may have a different mass.[114] When free electrons—both in vacuum and metals—move, they produce a net flow of charge called an electric current, which generates a magnetic field. Likewise a current can be created by a changing magnetic field. These interactions are described mathematically by Maxwell’s equations.[115]
Metals are relatively good conductors of heat, primarily because the delocalized electrons are free to transport thermal energy between atoms. However, unlike electrical conductivity, the thermal conductivity of a metal is nearly independent of temperature. This is expressed mathematically by the Wiedemann-Franz law,[117] which states that the ratio of thermal conductivity to electrical conductivity is proportional to the temperature. The thermal disorder in the metallic lattice increases the electrical resistivity of the material, producing a temperature dependence for electrical current.[120]
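The proportionality constant in the Wiedemann-Franz law, the Lorenz number, follows from the free-electron model as L = (π²/3)(k_B/e)². A short numerical check (a sketch added here, not from the original article):

```python
import math

# Lorenz number L = kappa/(sigma*T) predicted by the free-electron model.
K_B = 1.380649e-23       # Boltzmann constant, J/K
E_CHARGE = 1.602177e-19  # elementary charge, C

lorenz = (math.pi**2 / 3) * (K_B / E_CHARGE)**2
print(f"L = {lorenz:.3e} W*Ohm/K^2")  # ~2.44e-08, close to values measured in metals
```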
Electrons inside conducting solids, which are quasi-particles themselves, when tightly confined at temperatures close to absolute zero, behave as though they had split into two other quasiparticles: spinons and holons.[123][124] The former carries spin and magnetic moment, while the latter carries electrical charge.
Motion and energy
According to Einstein’s theory of special relativity, as an electron’s speed approaches the speed of light, from an observer’s point of view its relativistic mass increases, thereby making it more and more difficult to accelerate it from within the observer’s frame of reference. The speed of an electron can approach, but never reach, the speed of light in a vacuum, c. However, when relativistic electrons—that is, electrons moving at a speed close to c—are injected into a dielectric medium such as water, where the local speed of light is significantly less than c, the electrons temporarily travel faster than light in the medium. As they interact with the medium, they generate a faint light called Cherenkov radiation.[125]
The effects of special relativity are based on a quantity known as the Lorentz factor, defined as
\gamma = \frac{1}{\sqrt{1 - v^2/c^2}},
where v is the speed of the particle. The kinetic energy K_e of an electron moving with velocity v is:
\displaystyle K_e = (\gamma - 1)m_e c^2,
where m_e is the mass of the electron.
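A small worked example tying the two formulas above to the Cherenkov effect mentioned earlier (a sketch, assuming water with refractive index n ≈ 1.33; not part of the original article): an electron radiates in water once its speed exceeds c/n, which corresponds to a kinetic energy of roughly a quarter of an MeV.

```python
import math

M_E_C2_KEV = 511.0  # electron rest energy m_e*c^2, keV

def lorentz_factor(beta: float) -> float:
    """gamma = 1/sqrt(1 - v^2/c^2), with beta = v/c."""
    return 1.0 / math.sqrt(1.0 - beta**2)

def kinetic_energy_kev(beta: float) -> float:
    """K_e = (gamma - 1)*m_e*c^2, per the formula above."""
    return (lorentz_factor(beta) - 1.0) * M_E_C2_KEV

beta_threshold = 1.0 / 1.33  # electron must outrun light in water
print(f"gamma = {lorentz_factor(beta_threshold):.3f}")        # ~1.52
print(f"K_e = {kinetic_energy_kev(beta_threshold):.0f} keV")  # ~264 keV
```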
Photons with sufficient combined energy can convert into an electron-positron pair, and the pair can in turn annihilate back into photons:
\gamma + \gamma \rightleftharpoons e^{+} + e^{-}
For reasons that remain uncertain, during the process of leptogenesis there was an excess in the number of electrons over positrons.[130] Hence, about one electron in every billion survived the annihilation process. This excess matched the excess of protons over anti-protons, in a condition known as baryon asymmetry, resulting in a net charge of zero for the universe.[131][132] The surviving protons and neutrons began to participate in reactions with each other—in the process known as nucleosynthesis, forming isotopes of hydrogen and helium, with trace amounts of lithium. This process peaked after about five minutes.[133] Any leftover neutrons underwent negative beta decay with a half-life of about a thousand seconds, releasing a proton and electron in the process,
n \rightarrow p + e^{-} + \bar{\nu}_e
An extended air shower generated by an energetic cosmic ray striking the Earth’s atmosphere
At the end of its lifetime, a star with more than about 20 solar masses can undergo gravitational collapse to form a black hole.[138] According to classical physics, these massive stellar objects exert a gravitational attraction that is strong enough to prevent anything, even electromagnetic radiation, from escaping past the Schwarzschild radius. However, it is believed that quantum mechanical effects may allow Hawking radiation to be emitted at this distance. Electrons (and positrons) are thought to be created at the event horizon of these stellar remnants.
Cosmic rays are particles traveling through space with high energies. Energies as high as 3.0×10²⁰ eV have been recorded.[141] When these particles collide with nucleons in the Earth's atmosphere, a shower of particles is generated, including pions.[142] More than half of the cosmic radiation observed from the Earth's surface consists of muons. The muon is a lepton produced in the upper atmosphere by the decay of a pion:
\pi^{-} \rightarrow \mu^{-} + \bar{\nu}_{\mu}
\mu^{-} \rightarrow e^{-} + \bar{\nu}_{e} + \nu_{\mu}
The frequency of a photon is proportional to its energy. As a bound electron transitions between different energy levels of an atom, it will absorb or emit photons at characteristic frequencies. For instance, when atoms are irradiated by a source with a broad spectrum, distinct absorption lines will appear in the spectrum of transmitted radiation. Each element or molecule displays a characteristic set of spectral lines, such as the hydrogen spectral series. Spectroscopic measurements of the strength and width of these lines allow the composition and physical properties of a substance to be determined.[146][147]
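For a concrete instance of such a characteristic line (an added sketch using the Rydberg formula, which governs the hydrogen spectral series mentioned above): the n = 3 → n = 2 transition reproduces hydrogen's familiar red Balmer-alpha line.

```python
# Rydberg formula 1/lambda = R*(1/n1^2 - 1/n2^2) for hydrogen lines,
# using the Rydberg constant for an infinitely heavy nucleus.
RYDBERG = 1.097373e7  # m^-1

def wavelength_nm(n_lower: int, n_upper: int) -> float:
    inverse_m = RYDBERG * (1.0 / n_lower**2 - 1.0 / n_upper**2)
    return 1e9 / inverse_m

print(f"{wavelength_nm(2, 3):.1f} nm")  # ~656 nm, the red Balmer-alpha line
```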
The first video images of an electron’s energy distribution were captured by a team at Lund University in Sweden, February 2008. The scientists used extremely short flashes of light, called attosecond pulses, which allowed an electron’s motion to be observed for the first time.[150][151]
Plasma applications
Particle beams
Electron beams are used in welding,[154] which allows energy densities up to 10⁷ W·cm⁻² across a narrow focus diameter of 0.1–1.3 mm and usually does not require a filler material. This welding technique must be performed in a vacuum, so that the electron beam does not interact with the gas prior to reaching the target, and it can be used to join conductive materials that would otherwise be considered unsuitable for welding.[155][156]
Electron beam processing is used to irradiate materials in order to change their physical properties or sterilize medical and food products.[159] In radiation therapy, electron beams are generated by linear accelerators for treatment of superficial tumors. Because an electron beam only penetrates to a limited depth before being absorbed, typically up to 5 cm for electron energies in the range 5–20 MeV, electron therapy is useful for treating skin lesions such as basal cell carcinomas. An electron beam can be used to supplement the treatment of areas that have been irradiated by X-rays.[160][161]
Particle accelerators use electric fields to propel electrons and their antiparticles to high energies. As these particles pass through magnetic fields, they emit synchrotron radiation. The intensity of this radiation is spin dependent, which causes polarization of the electron beam—a process known as the Sokolov–Ternov effect.[note 8] The polarized electron beams can be useful for various experiments. Synchrotron radiation can also be used for cooling the electron beams, which reduces the momentum spread of the particles. Once the particles have accelerated to the required energies, separate electron and positron beams are brought into collision. The resulting energy emissions are observed with particle detectors and are studied in particle physics.[162]
Low-energy electron diffraction (LEED) is a method of bombarding a crystalline material with a collimated beam of electrons, then observing the resulting diffraction patterns to determine the structure of the material. The required energy of the electrons is typically in the range 20–200 eV.[163] The reflection high energy electron diffraction (RHEED) technique uses the reflection of a beam of electrons fired at various low angles to characterize the surface of crystalline materials. The beam energy is typically in the range 8–20 keV and the angle of incidence is 1–4°.[164][165]
The electron microscope directs a focused beam of electrons at a specimen. As the beam interacts with the material, some electrons change their properties, such as movement direction, angle, relative phase and energy. By recording these changes in the electron beam, microscopists can produce atomically resolved images of the material.[166] In blue light, conventional optical microscopes have a diffraction-limited resolution of about 200 nm.[167] By comparison, electron microscopes are limited by the de Broglie wavelength of the electron. This wavelength, for example, is equal to 0.0037 nm for electrons accelerated across a 100,000-volt potential.[168] The Transmission Electron Aberration-corrected Microscope is capable of sub-0.05 nm resolution, which is more than enough to resolve individual atoms.[169] This capability makes the electron microscope a useful laboratory instrument for high-resolution imaging. However, electron microscopes are expensive instruments that are costly to maintain.
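The 0.0037 nm figure quoted above can be checked numerically (a sketch, not from the original article) using the relativistic relation pc = √(K² + 2K·m_e c²) and λ = h/p = hc/(pc):

```python
import math

HC_EV_NM = 1239.84   # h*c in eV*nm
M_E_C2_EV = 511.0e3  # electron rest energy, eV

def de_broglie_nm(kinetic_energy_ev: float) -> float:
    """Relativistic de Broglie wavelength of an electron, in nm."""
    pc = math.sqrt(kinetic_energy_ev**2 + 2.0 * kinetic_energy_ev * M_E_C2_EV)
    return HC_EV_NM / pc

print(f"{de_broglie_nm(100e3):.4f} nm")  # ~0.0037 nm at a 100 kV potential
```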
There are two main types of electron microscopes: transmission and scanning. Transmission electron microscopes function in a manner similar to an overhead projector, with a beam of electrons passing through a slice of material and then being projected by lenses onto a photographic slide or a charge-coupled device. In scanning electron microscopes, the image is produced by rastering a finely focused electron beam, as in a TV set, across the studied sample. The magnifications range from 100× to 1,000,000× or higher for both microscope types. The scanning tunneling microscope uses quantum tunneling of electrons from a sharp metal tip into the studied material and can produce atomically resolved images of its surface.[170][171][172]
In the free-electron laser (FEL), a relativistic electron beam is passed through a pair of undulators containing arrays of dipole magnets whose fields are oriented in alternating directions. The electrons emit synchrotron radiation which, in turn, coherently interacts with the same electrons. This leads to strong amplification of the radiation field at the resonance frequency. FELs can emit coherent, high-brilliance electromagnetic radiation with a wide range of frequencies, from microwaves to soft X-rays. These devices may be used in the future for manufacturing, communication and various medical applications, such as soft-tissue surgery.[173]
1. ^ The fractional version's denominator is the inverse of the decimal value (along with its relative standard uncertainty of 4.2×10⁻¹³ u).
2. ^ The electron’s charge is the negative of elementary charge, which has a positive value for the proton.
3. ^ This magnitude is obtained from the spin quantum number as
\begin{alignat}{2} S & = \sqrt{s(s + 1)} \cdot \frac{h}{2\pi} \\ & = \frac{\sqrt{3}}{2} \hbar \\ \end{alignat}
for quantum number s = 1/2.
See: Gupta, M.C. (2001). Atomic and Molecular Spectroscopy. New Age Publishers. p. 81. ISBN 81-224-1300-5. http://books.google.com/?id=0tIA1M6DiQIC&pg=PA81.
4. ^ Bohr magneton:
\mu_{\mathrm B} = \frac{e\hbar}{2m_{\mathrm e}}.
5. ^ The classical electron radius is derived as follows. Assume that the electron's charge is spread uniformly throughout a spherical volume. Since one part of the sphere would repel the other parts, the sphere contains electrostatic potential energy. This energy is assumed to equal the electron's rest energy, defined by special relativity (E = mc²). From electrostatics theory, the potential energy of a sphere with radius r and charge e is given by:
E_{\mathrm p} = \frac{e^2}{8\pi \varepsilon_0 r},
where ε₀ is the vacuum permittivity. For an electron with rest mass m₀, the rest energy is equal to:
\textstyle E_{\mathrm p} = m_0 c^2,
where c is the speed of light in vacuum. Setting these two energies equal and solving for r gives the classical electron radius.
See: Haken, H.; Wolf, H.C.; Brewer, W.D. (2005). The Physics of Atoms and Quanta: Introduction to Experiments and Theory. Springer. p. 70. ISBN 3-540-67274-5. http://books.google.com/?id=SPrAMy8glocC&pg=PA70.
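A numerical check of the estimate in the note above (a sketch added here; note that the conventional classical electron radius, e²/(4πε₀m_e c²) ≈ 2.82 fm, is twice this value, since the derivation fixes only the order of magnitude):

```python
import math

E = 1.602177e-19     # elementary charge, C
EPS0 = 8.854188e-12  # vacuum permittivity, F/m
M_E = 9.109384e-31   # electron mass, kg
C = 2.997925e8       # speed of light, m/s

# Solve e^2/(8*pi*eps0*r) = m_e*c^2 for r, as in the note above.
r = E**2 / (8 * math.pi * EPS0 * M_E * C**2)
print(f"r ~ {r:.3e} m")  # ~1.41e-15 m
```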
6. ^ The change in wavelength, Δλ, depends on the angle of the recoil, θ, as follows:
\textstyle \Delta \lambda = \frac{h}{m_ec} (1 - \cos \theta),
where h is the Planck constant, m_e is the electron mass and c is the speed of light.
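A worked example for the Compton formula above (an added sketch): at 90° the shift equals the electron's Compton wavelength h/(m_e c) ≈ 2.43 pm, and it doubles at 180°.

```python
import math

H = 6.62607e-34      # Planck constant, J*s
M_E = 9.109384e-31   # electron mass, kg
C = 2.997925e8       # speed of light, m/s

def compton_shift_pm(theta_deg: float) -> float:
    """Wavelength shift (h/(m_e*c))*(1 - cos(theta)), in picometres."""
    return (H / (M_E * C)) * (1.0 - math.cos(math.radians(theta_deg))) * 1e12

print(f"{compton_shift_pm(90):.2f} pm")   # ~2.43 pm
print(f"{compton_shift_pm(180):.2f} pm")  # ~4.85 pm
```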
1. ^ Dahl, P.F. (1997). Flash of the Cathode Rays: A History of J J Thomson’s Electron. CRC Press. p. 72. ISBN 0-7503-0453-7. http://books.google.com/?id=xUzaWGocMdMC&printsec=frontcover.
2. ^ a b c Eichten, E.J.; Peskin, M.E.; Peskin, M. (1983). “New Tests for Quark and Lepton Substructure”. Physical Review Letters 50 (11): 811–814. Bibcode 1983PhRvL..50..811E. doi:10.1103/PhysRevLett.50.811.
3. ^ a b Farrar, W.V. (1969). “Richard Laming and the Coal-Gas Industry, with His Views on the Structure of Matter”. Annals of Science 25: 243–254. doi:10.1080/00033796900200141.
4. ^ a b c Arabatzis, T. (2006). Representing Electrons: A Biographical Approach to Theoretical Entities. University of Chicago Press. pp. 70–74. ISBN 0-226-02421-0. http://books.google.com/?id=rZHT-chpLmAC&pg=PA70.
5. ^ Buchwald, J.Z.; Warwick, A. (2001). Histories of the Electron: The Birth of Microphysics. MIT Press. pp. 195–203. ISBN 0-262-52424-4. http://books.google.com/?id=1yqqhlIdCOoC&pg=PA195.
6. ^ a b c d e f Thomson, J.J. (1897). “Cathode Rays”. Philosophical Magazine 44: 293. http://web.lemoyne.edu/~GIUNTA/thomson1897.html.
7. ^ a b c d e P.J. Mohr, B.N. Taylor, and D.B. Newell (2011), “The 2010 CODATA Recommended Values of the Fundamental Physical Constants” (Web Version 6.0). This database was developed by J. Baker, M. Douma, and S. Kotochigova. Available: http://physics.nist.gov/constants [Thursday, 02-Jun-2011 21:00:12 EDT]. National Institute of Standards and Technology, Gaithersburg, MD 20899.
8. ^ a b “CODATA value: proton-electron mass ratio”. National Institute of Standards and Technology. http://physics.nist.gov/cgi-bin/cuu/Value?mpsme. Retrieved 2009-07-18.
9. ^ a b c d Curtis, L.J. (2003). Atomic Structure and Lifetimes: A Conceptual Approach. Cambridge University Press. p. 74. ISBN 0-521-53635-9. http://books.google.com/?id=KmwCsuvxClAC&pg=PA74.
10. ^ Anastopoulos, C. (2008). Particle Or Wave: The Evolution of the Concept of Matter in Modern Physics. Princeton University Press. pp. 236–237. ISBN 0-691-13512-6. http://books.google.com/?id=rDEvQZhpltEC&pg=PA236.
11. ^ a b c Dahl (1997:122–185).
12. ^ a b Wilson, R. (1997). Astronomy Through the Ages: The Story of the Human Attempt to Understand the Universe. CRC Press. p. 138. ISBN 0-7484-0748-0. http://books.google.com/?id=AoiJ3hA8bQ8C&pg=PA138.
13. ^ a b Pauling, L.C. (1960). The Nature of the Chemical Bond and the Structure of Molecules and Crystals: an introduction to modern structural chemistry (3rd ed.). Cornell University Press. pp. 4–10. ISBN 0-8014-0333-2. http://books.google.com/?id=L-1K9HmKmUUC.
15. ^ Baigrie, B. (2006). Electricity and Magnetism: A Historical Perspective. Greenwood Press. pp. 7–8. ISBN 0-313-33358-0. http://books.google.com/?id=3XEc5xkWxi4C&pg=PA7.
16. ^ Keithley, J.F. (1999). The Story of Electrical and Magnetic Measurements: From 500 B.C. to the 1940s. IEEE Press. ISBN 0-7803-1193-0. http://books.google.com/?id=uwgNAtqSHuQC&pg=PA207.
17. ^ “Benjamin Franklin (1706–1790)”. Eric Weisstein’s World of Biography. Wolfram Research. http://scienceworld.wolfram.com/biography/FranklinBenjamin.html. Retrieved 2010-12-16.
18. ^ Myers, R.L. (2006). The Basics of Physics. Greenwood Publishing Group. p. 242. ISBN 0-313-32857-9. http://books.google.com/books?id=KnynjL44pI4C&pg=PA242.
19. ^ Barrow, J.D. (1983). “Natural Units Before Planck”. Quarterly Journal of the Royal Astronomical Society 24: 24–26. Bibcode 1983QJRAS..24…24B.
20. ^ Stoney, G.J. (1894). “Of the “Electron,” or Atom of Electricity”. Philosophical Magazine 38 (5): 418–420.
22. ^ Guralnik, D.B. ed. (1970). Webster’s New World Dictionary. Prentice-Hall. p. 450.
23. ^ Born, M.; Blin-Stoyle, R.J.; Radcliffe, J.M. (1989). Atomic Physics. Courier Dover. p. 26. ISBN 0-486-65984-4. http://books.google.com/?id=NmM-KujxMtoC&pg=PA26.
24. ^ Dahl (1997:55–58).
25. ^ DeKosky, R.K. (1983). “William Crookes and the quest for absolute vacuum in the 1870s”. Annals of Science 40 (1): 1–18. doi:10.1080/00033798300200101.
26. ^ a b c Leicester, H.M. (1971). The Historical Background of Chemistry. Courier Dover Publications. pp. 221–222. ISBN 0-486-61053-5. http://books.google.com/?id=aJZVQnqcwv4C&pg=PA221.
27. ^ Dahl (1997:64–78).
28. ^ Zeeman, P. (1907). “Sir William Crookes, F.R.S.”. Nature 77 (1984): 1–3. Bibcode 1907Natur..77….1C. doi:10.1038/077001a0. http://books.google.com/?id=UtYRAAAAYAAJ.
29. ^ Dahl (1997:99).
30. ^ Thomson, J.J. (1906). “Nobel Lecture: Carriers of Negative Electricity”. The Nobel Foundation. http://nobelprize.org/nobel_prizes/physics/laureates/1906/thomson-lecture.pdf. Retrieved 2008-08-25.
31. ^ Trenn, T.J. (1976). “Rutherford on the Alpha-Beta-Gamma Classification of Radioactive Rays”. Isis 67 (1): 61–75. doi:10.1086/351545. JSTOR 231134.
32. ^ Becquerel, H. (1900). “Déviation du Rayonnement du Radium dans un Champ Électrique”. Comptes Rendus de l’Académie des Sciences 130: 809–815. (French)
33. ^ Buchwald and Warwick (2001:90–91).
34. ^ Myers, W.G. (1976). “Becquerel’s Discovery of Radioactivity in 1896”. Journal of Nuclear Medicine 17 (7): 579–582. PMID 775027. http://jnm.snmjournals.org/cgi/content/abstract/17/7/579.
35. ^ Kikoin, I.K.; Sominskiĭ, I.S. (1961). “Abram Fedorovich Ioffe (on his eightieth birthday)”. Soviet Physics Uspekhi 3: 798–809. Bibcode 1961SvPhU…3..798K. doi:10.1070/PU1961v003n05ABEH005812. Original publication in Russian: Кикоин, И.К.; Соминский, М.С. (1960). “Академик А.Ф. Иоффе”. Успехи Физических Наук 72 (10): 303–321. http://ufn.ru/ufn60/ufn60_10/Russian/r6010e.pdf.
36. ^ Millikan, R.A. (1911). “The Isolation of an Ion, a Precision Measurement of its Charge, and the Correction of Stokes’ Law”. Physical Review 32 (2): 349–397. Bibcode 1911PhRvI..32..349M. doi:10.1103/PhysRevSeriesI.32.349.
37. ^ Das Gupta, N.N.; Ghosh, S.K. (1999). “A Report on the Wilson Cloud Chamber and Its Applications in Physics”. Reviews of Modern Physics 18: 225–290. Bibcode 1946RvMP…18..225G. doi:10.1103/RevModPhys.18.225.
38. ^ a b c Smirnov, B.M. (2003). Physics of Atoms and Ions. Springer. pp. 14–21. ISBN 0-387-95550-X. http://books.google.com/?id=I1O8WYOcUscC&pg=PA14.
39. ^ Bohr, N. (1922). “Nobel Lecture: The Structure of the Atom”. The Nobel Foundation. http://nobelprize.org/nobel_prizes/physics/laureates/1922/bohr-lecture.pdf. Retrieved 2008-12-03.
40. ^ Lewis, G.N. (1916). “The Atom and the Molecule”. Journal of the American Chemical Society 38 (4): 762–786. doi:10.1021/ja02261a002.
41. ^ a b Arabatzis, T.; Gavroglu, K. (1997). “The chemists’ electron”. European Journal of Physics 18: 150–163. Bibcode 1997EJPh…18..150A. doi:10.1088/0143-0807/18/3/005.
42. ^ Langmuir, I. (1919). “The Arrangement of Electrons in Atoms and Molecules”. Journal of the American Chemical Society 41 (6): 868–934. doi:10.1021/ja02227a002.
43. ^ Scerri, E.R. (2007). The Periodic Table. Oxford University Press. pp. 205–226. ISBN 0-19-530573-6. http://books.google.com/?id=SNRdGWCGt1UC&pg=PA205.
44. ^ Massimi, M. (2005). Pauli’s Exclusion Principle, The Origin and Validation of a Scientific Principle. Cambridge University Press. pp. 7–8. ISBN 0-521-83911-4. http://books.google.com/?id=YS91Gsbd13cC&pg=PA7.
45. ^ Uhlenbeck, G.E.; Goudsmith, S. (1925). “Ersetzung der Hypothese vom unmechanischen Zwang durch eine Forderung bezüglich des inneren Verhaltens jedes einzelnen Elektrons”. Die Naturwissenschaften 13 (47). Bibcode 1925NW…..13..953E. doi:10.1007/BF01558878. (German)
46. ^ Pauli, W. (1923). “Über die Gesetzmäßigkeiten des anomalen Zeemaneffektes”. Zeitschrift für Physik 16 (1): 155–164. Bibcode 1923ZPhy…16..155P. doi:10.1007/BF01327386. (German)
47. ^ a b de Broglie, L. (1929). “Nobel Lecture: The Wave Nature of the Electron”. The Nobel Foundation. http://nobelprize.org/nobel_prizes/physics/laureates/1929/broglie-lecture.pdf. Retrieved 2008-08-30.
48. ^ Falkenburg, B. (2007). Particle Metaphysics: A Critical Account of Subatomic Reality. Springer. p. 85. ISBN 3-540-33731-8. http://books.google.com/?id=EbOz5I9RNrYC&pg=PA85.
49. ^ Davisson, C. (1937). “Nobel Lecture: The Discovery of Electron Waves”. The Nobel Foundation. http://nobelprize.org/nobel_prizes/physics/laureates/1937/davisson-lecture.pdf. Retrieved 2008-08-30.
50. ^ Schrödinger, E. (1926). “Quantisierung als Eigenwertproblem”. Annalen der Physik 385 (13): 437–490. Bibcode 1926AnP…385..437S. doi:10.1002/andp.19263851302. (German)
51. ^ Rigden, J.S. (2003). Hydrogen. Harvard University Press. pp. 59–86. ISBN 0-674-01252-6. http://books.google.com/?id=FhFxn_lUvz0C&pg=PT66.
52. ^ Reed, B.C. (2007). Quantum Mechanics. Jones & Bartlett Publishers. pp. 275–350. ISBN 0-7637-4451-4. http://books.google.com/?id=4sluccbpwjsC&pg=PA275.
53. ^ Dirac, P.A.M. (1928). “The Quantum Theory of the Electron”. Proceedings of the Royal Society of London A 117 (778): 610–624. Bibcode 1928RSPSA.117..610D. doi:10.1098/rspa.1928.0023.
54. ^ Dirac, P.A.M. (1933). “Nobel Lecture: Theory of Electrons and Positrons”. The Nobel Foundation. http://nobelprize.org/nobel_prizes/physics/laureates/1933/dirac-lecture.pdf. Retrieved 2008-11-01.
55. ^ Kragh, H. (2002). Quantum Generations: A History of Physics in the Twentieth Century. Princeton University Press. p. 132. ISBN 0-691-09552-3. http://books.google.com/?id=ELrFDIldlawC&pg=PA132.
56. ^ Gaynor, F. (1950). Concise Encyclopedia of Atomic Energy. The Philosophical Library. p. 117.
57. ^ “The Nobel Prize in Physics 1965”. The Nobel Foundation. http://nobelprize.org/nobel_prizes/physics/laureates/1965/. Retrieved 2008-11-04.
58. ^ Panofsky, W.K.H. (1997). “The Evolution of Particle Accelerators & Colliders”. Beam Line (Stanford University) 27 (1): 36–44. http://www.slac.stanford.edu/pubs/beamline/27/1/27-1-panofsky.pdf. Retrieved 2008-09-15.
59. ^ Elder, F.R.; et al. (1947). “Radiation from Electrons in a Synchrotron”. Physical Review 71 (11): 829–830. Bibcode 1947PhRv…71..829E. doi:10.1103/PhysRev.71.829.5.
60. ^ Hoddeson, L.; et al. (1997). The Rise of the Standard Model: Particle Physics in the 1960s and 1970s. Cambridge University Press. pp. 25–26. ISBN 0-521-57816-7. http://books.google.com/?id=klLUs2XUmOkC&pg=PA25.
61. ^ Bernardini, C. (2004). “AdA: The First Electron–Positron Collider”. Physics in Perspective 6 (2): 156–183. Bibcode 2004PhP…..6..156B. doi:10.1007/s00016-003-0202-y.
62. ^ “Testing the Standard Model: The LEP experiments”. CERN. 2008. http://public.web.cern.ch/PUBLIC/en/Research/LEPExp-en.html. Retrieved 2008-09-15.
63. ^ “LEP reaps a final harvest”. CERN Courier 40 (10). 2000. http://cerncourier.com/cws/article/cern/28335. Retrieved 2008-11-01.
64. ^ Frampton, P.H. (2000). “Quarks and Leptons Beyond the Third Generation”. Physics Reports 330: 263–348. arXiv:hep-ph/9903387. Bibcode 2000PhR…330..263F. doi:10.1016/S0370-1573(99)00095-2.
66. ^ a b c d e f g h The original source for CODATA is Mohr, P.J.; Taylor, B.N.; Newell, D.B. (2006). “CODATA recommended values of the fundamental physical constants”. Reviews of Modern Physics 80: 633–730. Bibcode 2008RvMP…80..633M. doi:10.1103/RevModPhys.80.633.
Individual physical constants from the CODATA are available at: “The NIST Reference on Constants, Units and Uncertainty”. National Institute of Standards and Technology. http://physics.nist.gov/cuu/. Retrieved 2009-01-15.
67. ^ Zombeck, M.V. (2007). Handbook of Space Astronomy and Astrophysics (3rd ed.). Cambridge University Press. p. 14. ISBN 0-521-78242-2. http://books.google.com/?id=tp_G85jm6IAC&pg=PA14.
68. ^ Murphy, M.T.; et al. (2008). “Strong Limit on a Variable Proton-to-Electron Mass Ratio from Molecules in the Distant Universe”. Science 320 (5883): 1611–1613. Bibcode 2008Sci…320.1611M. doi:10.1126/science.1156352. PMID 18566280. http://www.sciencemag.org/cgi/content/abstract/320/5883/1611.
69. ^ Zorn, J.C.; Chamberlain, G.E.; Hughes, V.W. (1963). “Experimental Limits for the Electron-Proton Charge Difference and for the Charge of the Neutron”. Physical Review 129 (6): 2566–2576. Bibcode 1963PhRv..129.2566Z. doi:10.1103/PhysRev.129.2566.
70. ^ a b Odom, B.; et al. (2006). “New Measurement of the Electron Magnetic Moment Using a One-Electron Quantum Cyclotron”. Physical Review Letters 97 (3): 030801. Bibcode 2006PhRvL..97c0801O. doi:10.1103/PhysRevLett.97.030801. PMID 16907490.
72. ^ Gabrielse, G.; et al. (2006). “New Determination of the Fine Structure Constant from the Electron g Value and QED”. Physical Review Letters 97: 030802(1–4). Bibcode 2006PhRvL..97c0802G. doi:10.1103/PhysRevLett.97.030802.
73. ^ Dehmelt, H. (1988). “A Single Atomic Particle Forever Floating at Rest in Free Space: New Value for Electron Radius”. Physica Scripta T22: 102–110. Bibcode 1988PhST…22..102D. doi:10.1088/0031-8949/1988/T22/016.
74. ^ Meschede, D. (2004). Optics, light and lasers: The Practical Approach to Modern Aspects of Photonics and Laser Physics. Wiley-VCH. p. 168. ISBN 3-527-40364-7. http://books.google.com/?id=PLISLfBLcmgC&pg=PA168.
75. ^ Steinberg, R.I.; et al. (1975). "Experimental test of charge conservation and the stability of the electron". Physical Review D 12: 2582–2586. Bibcode 1975PhRvD..12.2582S. doi:10.1103/PhysRevD.12.2582.
76. ^ Yao, W.-M. (2006). “Review of Particle Physics”. Journal of Physics G 33 (1): 77–115. arXiv:astro-ph/0601168. Bibcode 2006JPhG…33….1Y. doi:10.1088/0954-3899/33/1/001.
77. ^ a b c Munowitz, M. (2005). Knowing, The Nature of Physical Law. Oxford University Press. pp. 162–218. ISBN 0-19-516737-6. http://books.google.com/?id=IjVtDc85CYwC&pg=PA162.
78. ^ Kane, G. (October 9, 2006). “Are virtual particles really constantly popping in and out of existence? Or are they merely a mathematical bookkeeping device for quantum mechanics?”. Scientific American. http://www.sciam.com/article.cfm?id=are-virtual-particles-rea&topicID=13. Retrieved 2008-09-19.
79. ^ Taylor, J. (1989). “Gauge Theories in Particle Physics”. In Davies, Paul. The New Physics. Cambridge University Press. p. 464. ISBN 0-521-43831-4. http://books.google.com/?id=akb2FpZSGnMC&pg=PA464.
81. ^ Gribbin, J. (January 25, 1997). “More to electrons than meets the eye”. New Scientist. http://www.newscientist.com/article/mg15320662.300-science–more-to-electrons-than-meets-the-eye.html. Retrieved 2008-09-17.
82. ^ Levine, I.; et al. (1997). “Measurement of the Electromagnetic Coupling at Large Momentum Transfer”. Physical Review Letters 78: 424–427. Bibcode 1997PhRvL..78..424L. doi:10.1103/PhysRevLett.78.424.
83. ^ Murayama, H. (March 10–17, 2006). “Supersymmetry Breaking Made Easy, Viable and Generic”. Proceedings of the XLIInd Rencontres de Moriond on Electroweak Interactions and Unified Theories. La Thuile, Italy. arXiv:0709.3041. —lists a 9% mass difference for an electron that is the size of the Planck distance.
84. ^ Schwinger, J. (1948). “On Quantum-Electrodynamics and the Magnetic Moment of the Electron”. Physical Review 73 (4): 416–417. Bibcode 1948PhRv…73..416S. doi:10.1103/PhysRev.73.416.
85. ^ Huang, K. (2007). Fundamental Forces of Nature: The Story of Gauge Fields. World Scientific. pp. 123–125. ISBN 981-270-645-3. http://books.google.com/?id=q-CIFHpHxfEC&pg=PA123.
86. ^ Foldy, L.L.; Wouthuysen, S. (1950). “On the Dirac Theory of Spin 1/2 Particles and Its Non-Relativistic Limit”. Physical Review 78: 29–36. Bibcode 1950PhRv…78…29F. doi:10.1103/PhysRev.78.29.
87. ^ Sidharth, B.G. (2008). “Revisiting Zitterbewegung”. International Journal of Theoretical Physics 48: 497–506. arXiv:0806.0985. Bibcode 2009IJTP…48..497S. doi:10.1007/s10773-008-9825-8.
88. ^ Elliott, R.S. (1978). “The History of Electromagnetics as Hertz Would Have Known It”. IEEE Transactions on Microwave Theory and Techniques 36 (5): 806–823. Bibcode 1988ITMTT..36..806E. doi:10.1109/22.3600. http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=3600.
89. ^ Munowitz (2005:140).
90. ^ Crowell, B. (2000). Electricity and Magnetism. Light and Matter. pp. 129–152. ISBN 0-9704670-4-4. http://books.google.com/?id=s9QWZNfnz1oC&pg=PT129.
91. ^ Munowitz (2005:160).
92. ^ Mahadevan, R.; Narayan, R.; Yi, I. (1996). “Harmony in Electrons: Cyclotron and Synchrotron Emission by Thermal Electrons in a Magnetic Field”. Astrophysical Journal 465: 327–337. arXiv:astro-ph/9601073. Bibcode 1996ApJ…465..327M. doi:10.1086/177422.
93. ^ Rohrlich, F. (1999). “The Self-Force and Radiation Reaction”. American Journal of Physics 68 (12): 1109–1112. Bibcode 2000AmJPh..68.1109R. doi:10.1119/1.1286430.
94. ^ Georgi, H. (1989). “Grand Unified Theories”. In Davies, Paul. The New Physics. Cambridge University Press. p. 427. ISBN 0-521-43831-4. http://books.google.com/?id=akb2FpZSGnMC&pg=PA427.
95. ^ Blumenthal, G.J.; Gould, R. (1970). “Bremsstrahlung, Synchrotron Radiation, and Compton Scattering of High-Energy Electrons Traversing Dilute Gases”. Reviews of Modern Physics 42: 237–270. Bibcode 1970RvMP…42..237B. doi:10.1103/RevModPhys.42.237.
96. ^ Staff (2008). “The Nobel Prize in Physics 1927”. The Nobel Foundation. http://nobelprize.org/nobel_prizes/physics/laureates/1927/. Retrieved 2008-09-28.
97. ^ Chen, S.-Y.; Maksimchuk, A.; Umstadter, D. (1998). “Experimental observation of relativistic nonlinear Thomson scattering”. Nature 396 (6712): 653–655. arXiv:physics/9810036. Bibcode 1998Natur.396..653C. doi:10.1038/25303.
98. ^ Beringer, R.; Montgomery, C.G. (1942). “The Angular Distribution of Positron Annihilation Radiation”. Physical Review 61 (5–6): 222–224. Bibcode 1942PhRv…61..222B. doi:10.1103/PhysRev.61.222.
99. ^ Buffa, A. (2000). College Physics (4th ed.). Prentice Hall. p. 888. ISBN 0130824445.
100. ^ Eichler, J. (2005). “Electron–positron pair production in relativistic ion–atom collisions”. Physics Letters A 347 (1–3): 67–72. Bibcode 2005PhLA..347…67E. doi:10.1016/j.physleta.2005.06.105.
101. ^ Hubbell, J.H. (2006). “Electron positron pair production by photons: A historical overview”. Radiation Physics and Chemistry 75 (6): 614–623. Bibcode 2006RaPC…75..614H. doi:10.1016/j.radphyschem.2005.10.008.
102. ^ Quigg, C. (June 4–30, 2000). “The Electroweak Theory”. TASI 2000: Flavor Physics for the Millennium. Boulder, Colorado. p. 80. arXiv:hep-ph/0204104.
103. ^ Mulliken, R.S. (1967). “Spectroscopy, Molecular Orbitals, and Chemical Bonding”. Science 157 (3784): 13–24. Bibcode 1967Sci…157…13M. doi:10.1126/science.157.3784.13. PMID 5338306.
105. ^ a b Grupen, C. (2000). “Physics of Particle Detection”. AIP Conference Proceedings 536: 3–34. arXiv:physics/9906063. doi:10.1063/1.1361756.
106. ^ Jiles, D. (1998). Introduction to Magnetism and Magnetic Materials. CRC Press. pp. 280–287. ISBN 0-412-79860-3. http://books.google.com/?id=axyWXjsdorMC&pg=PA280.
107. ^ Löwdin, P.O.; Erkki Brändas, E.; Kryachko, E.S. (2003). Fundamental World of Quantum Chemistry: A Tribute to the Memory of Per- Olov Löwdin. Springer. pp. 393–394. ISBN 1-4020-1290-X. http://books.google.com/?id=8QiR8lCX_qcC&pg=PA393.
108. ^ McQuarrie, D.A.; Simon, J.D. (1997). Physical Chemistry: A Molecular Approach. University Science Books. pp. 325–361. ISBN 0-935702-99-7. http://books.google.com/?id=f-bje0-DEYUC&pg=PA325.
109. ^ Daudel, R.; et al. (1973). “The Electron Pair in Chemistry”. Canadian Journal of Chemistry 52: 1310–1320. doi:10.1139/v74-201. http://article.pubs.nrc-cnrc.gc.ca/ppv/RPViewDoc?issn=1480-3291&volume=52&issue=8&startPage=1310.
110. ^ Rakov, V.A.; Uman, M.A. (2007). Lightning: Physics and Effects. Cambridge University Press. p. 4. ISBN 0-521-03541-4. http://books.google.com/?id=TuMa5lAa3RAC&pg=PA4.
111. ^ Freeman, G.R. (1999). “Triboelectricity and some associated phenomena”. Materials science and technology 15 (12): 1454–1458.
112. ^ Forward, K.M.; Lacks, D.J.; Sankaran, R.M. (2009). “Methodology for studying particle–particle triboelectrification in granular materials”. Journal of Electrostatics 67 (2–3): 178–183. doi:10.1016/j.elstat.2008.12.002.
113. ^ Weinberg, S. (2003). The Discovery of Subatomic Particles. Cambridge University Press. pp. 15–16. ISBN 0-521-82351-X. http://books.google.com/?id=tDpwhp2lOKMC&pg=PA15.
114. ^ Lou, L.-F. (2003). Introduction to phonons and electrons. World Scientific. pp. 162, 164. ISBN 978-981-238-461-4. http://books.google.com/?id=XMv-vfsoRF8C&pg=PA162.
115. ^ Guru, B.S.; Hızıroğlu, H.R. (2004). Electromagnetic Field Theory. Cambridge University Press. pp. 138, 276. ISBN 0-521-83016-8. http://books.google.com/?id=b2f8rCngSuAC&pg=PA138.
116. ^ Achuthan, M.K.; Bhat, K.N. (2007). Fundamentals of Semiconductor Devices. Tata McGraw-Hill. pp. 49–67. ISBN 0-07-061220-X. http://books.google.com/?id=REQkwBF4cVoC&pg=PA49.
117. ^ a b Ziman, J.M. (2001). Electrons and Phonons: The Theory of Transport Phenomena in Solids. Oxford University Press. p. 260. ISBN 0-19-850779-8. http://books.google.com/?id=UtEy63pjngsC&pg=PA260.
118. ^ Main, P. (June 12, 1993). “When electrons go with the flow: Remove the obstacles that create electrical resistance, and you get ballistic electrons and a quantum surprise”. New Scientist 1887: 30. http://www.newscientist.com/article/mg13818774.500-when-electrons-go-with-the-flow-remove-the-obstacles-thatcreate-electrical-resistance-and-you-get-ballistic-electrons-and-a-quantumsurprise.html. Retrieved 2008-10-09.
119. ^ Blackwell, G.R. (2000). The Electronic Packaging Handbook. CRC Press. pp. 6.39–6.40. ISBN 0-8493-8591-1. http://books.google.com/?id=D0PBG53PQlUC&pg=SA6-PA39.
120. ^ Durrant, A. (2000). Quantum Physics of Matter: The Physical World. CRC Press. p. 43. ISBN 0-7503-0721-8. http://books.google.com/books?id=F0JmHRkJHiUC&pg=PA43.
121. ^ Staff (2008). “The Nobel Prize in Physics 1972”. The Nobel Foundation. http://nobelprize.org/nobel_prizes/physics/laureates/1972/. Retrieved 2008-10-13.
122. ^ Kadin, A.M. (2007). “Spatial Structure of the Cooper Pair”. Journal of Superconductivity and Novel Magnetism 20 (4): 285–292. arXiv:cond-mat/0510279. doi:10.1007/s10948-006-0198-z.
123. ^ “Discovery About Behavior Of Building Block Of Nature Could Lead To Computer Revolution”. ScienceDaily. July 31, 2009. http://www.sciencedaily.com/releases/2009/07/090730141607.htm. Retrieved 2009-08-01.
124. ^ Jompol, Y.; et al. (2009). “Probing Spin-Charge Separation in a Tomonaga-Luttinger Liquid”. Science 325 (5940): 597–601. Bibcode 2009Sci…325..597J. doi:10.1126/science.1171769. PMID 19644117. http://www.sciencemag.org/cgi/content/abstract/325/5940/597.
125. ^ Staff (2008). “The Nobel Prize in Physics 1958, for the discovery and the interpretation of the Cherenkov effect”. The Nobel Foundation. http://nobelprize.org/nobel_prizes/physics/laureates/1958/. Retrieved 2008-09-25.
126. ^ Staff (August 26, 2008). “Special Relativity”. Stanford Linear Accelerator Center. http://www2.slac.stanford.edu/vvc/theory/relativity.html. Retrieved 2008-09-25.
127. ^ Adams, S. (2000). Frontiers: Twentieth Century Physics. CRC Press. p. 215. ISBN 0-7484-0840-1. http://books.google.com/?id=yIsMaQblCisC&pg=PA215.
128. ^ Lurquin, P.F. (2003). The Origins of Life and the Universe. Columbia University Press. p. 2. ISBN 0-231-12655-7.
130. ^ Christianto, V. (2007). “Thirty Unsolved Problems in the Physics of Elementary Particles”. Progress in Physics 4: 112–114. http://www.ptep-online.com/index_files/2007/PP-11-16.PDF.
131. ^ Kolb, E.W. (1980). “The Development of Baryon Asymmetry in the Early Universe”. Physics Letters B 91 (2): 217–221. Bibcode 1980PhLB…91..217K. doi:10.1016/0370-2693(80)90435-9.
132. ^ Sather, E. (Spring/Summer 1996). “The Mystery of Matter Asymmetry”. Beam Line. University of Stanford. http://www.slac.stanford.edu/pubs/beamline/26/1/26-1-sather.pdf. Retrieved 2008-11-01.
133. ^ Burles, S.; Nollett, K.M.; Turner, M.S. (1999). “Big-Bang Nucleosynthesis: Linking Inner Space and Outer Space”. arXiv:astro-ph/9903300 [astro-ph].
134. ^ Boesgaard, A.M.; Steigman, G. (1985). “Big bang nucleosynthesis – Theories and observations”. Annual Review of Astronomy and Astrophysics 23 (2): 319–378. Bibcode 1985ARA&A..23..319B. doi:10.1146/annurev.aa.23.090185.001535.
135. ^ a b Barkana, R. (2006). “The First Stars in the Universe and Cosmic Reionization”. Science 313 (5789): 931–934. arXiv:astro-ph/0608450. Bibcode 2006Sci…313..931B. doi:10.1126/science.1125644. PMID 16917052. http://www.sciencemag.org/cgi/content/full/313/5789/931.
136. ^ Burbidge, E.M.; et al. (1957). “Synthesis of Elements in Stars”. Reviews of Modern Physics 29 (4): 548–647. Bibcode 1957RvMP…29..547B. doi:10.1103/RevModPhys.29.547.
137. ^ Rodberg, L.S.; Weisskopf, V. (1957). “Fall of Parity: Recent Discoveries Related to Symmetry of Laws of Nature”. Science 125 (3249): 627–633. Bibcode 1957Sci…125..627R. doi:10.1126/science.125.3249.627. PMID 17810563.
138. ^ Fryer, C.L. (1999). “Mass Limits For Black Hole Formation”. Astrophysical Journal 522 (1): 413–418. arXiv:astro-ph/9902315. Bibcode 1999ApJ…522..413F. doi:10.1086/307647.
139. ^ Parikh, M.K.; Wilczek, F. (2000). “Hawking Radiation As Tunneling”. Physical Review Letters 85 (24): 5042–5045. arXiv:hep-th/9907001. Bibcode 2000PhRvL..85.5042P. doi:10.1103/PhysRevLett.85.5042. PMID 11102182.
140. ^ Hawking, S.W. (1974). “Black hole explosions?”. Nature 248 (5443): 30–31. Bibcode 1974Natur.248…30H. doi:10.1038/248030a0.
141. ^ Halzen, F.; Hooper, D. (2002). “High-energy neutrino astronomy: the cosmic ray connection”. Reports on Progress in Physics 66: 1025–1078. arXiv:astro-ph/0204527. Bibcode 2002astro.ph..4527H. doi:10.1088/0034-4885/65/7/201.
142. ^ Ziegler, J.F. (1998). “Terrestrial cosmic ray intensities”. IBM Journal of Research and Development 42 (1): 117–139. doi:10.1147/rd.421.0117.
143. ^ Sutton, C. (August 4, 1990). “Muons, pions and other strange particles”. New Scientist. http://www.newscientist.com/article/mg12717284.700-muons-pions-and-other-strange-particles-.html. Retrieved 2008-08-28.
144. ^ Wolpert, S. (July 24, 2008). “Scientists solve 30-year-old aurora borealis mystery”. University of California. http://www.universityofcalifornia.edu/news/article/18277. Retrieved 2008-10-11.
145. ^ Gurnett, D.A.; Anderson, R. (1976). “Electron Plasma Oscillations Associated with Type III Radio Bursts”. Science 194 (4270): 1159–1162. Bibcode 1976Sci…194.1159G. doi:10.1126/science.194.4270.1159. PMID 17790910.
146. ^ Martin, W.C.; Wiese, W.L. (2007). “Atomic Spectroscopy: A Compendium of Basic Ideas, Notation, Data, and Formulas”. National Institute of Standards and Technology. http://physics.nist.gov/Pubs/AtSpec/. Retrieved 2007-01-08.
147. ^ Fowles, G.R. (1989). Introduction to Modern Optics. Courier Dover. pp. 227–233. ISBN 0-486-65957-7. http://books.google.com/?id=SL1n9TuJ5YMC&pg=PA227.
148. ^ Staff (2008). “The Nobel Prize in Physics 1989”. The Nobel Foundation. http://nobelprize.org/nobel_prizes/physics/laureates/1989/illpres/. Retrieved 2008-09-24.
149. ^ Ekstrom, P. (1980). “The isolated Electron”. Scientific American 243 (2): 91–101. http://tf.nist.gov/general/pdf/166.pdf. Retrieved 2008-09-24.
150. ^ Mauritsson, J.. “Electron filmed for the first time ever”. Lunds Universitet. http://www.atto.fysik.lth.se/video/pressrelen.pdf. Retrieved 2008-09-17.
151. ^ Mauritsson, J.; et al. (2008). “Coherent Electron Scattering Captured by an Attosecond Quantum Stroboscope”. Physical Review Letters 100: 073003. Bibcode 2008PhRvL.100g3003M. doi:10.1103/PhysRevLett.100.073003. http://www.atto.fysik.lth.se/publications/papers/MauritssonPRL2008.pdf.
152. ^ Damascelli, A. (2004). “Probing the Electronic Structure of Complex Systems by ARPES”. Physica Scripta T109: 61–74. arXiv:cond-mat/0307085. Bibcode 2004PhST..109…61D. doi:10.1238/Physica.Topical.109a00061.
153. ^ Staff (April 4, 1975). “Image # L-1975-02972”. Langley Research Center, NASA. http://grin.hq.nasa.gov/ABSTRACTS/GPN-2000-003012.html. Retrieved 2008-09-20.
154. ^ Elmer, J. (March 3, 2008). “Standardizing the Art of Electron-Beam Welding”. Lawrence Livermore National Laboratory. https://www.llnl.gov/str/MarApr08/elmer.html. Retrieved 2008-10-16.
155. ^ Schultz, H. (1993). Electron Beam Welding. Woodhead Publishing. pp. 2–3. ISBN 1-85573-050-2. http://books.google.com/?id=I0xMo28DwcIC&pg=PA2.
156. ^ Benedict, G.F. (1987). Nontraditional Manufacturing Processes. Manufacturing engineering and materials processing. 19. CRC Press. p. 273. ISBN 0-8247-7352-7. http://books.google.com/?id=xdmNVSio8jUC&pg=PA273.
157. ^ Ozdemir, F.S. (June 25–27, 1979). “Electron beam lithography”. Proceedings of the 16th Conference on Design automation. San Diego, CA, USA: IEEE Press. pp. 383–391. http://portal.acm.org/citation.cfm?id=800292.811744. Retrieved 2008-10-16.
158. ^ Madou, M.J. (2002). Fundamentals of Microfabrication: the Science of Miniaturization (2nd ed.). CRC Press. pp. 53–54. ISBN 0-8493-0826-7. http://books.google.com/?id=9bk3gJeQKBYC&pg=PA53.
159. ^ Jongen, Y.; Herer, A. (May 2–5, 1996). “Electron Beam Scanning in Industrial Applications”. APS/AAPT Joint Meeting. American Physical Society. Bibcode 1996APS..MAY.H9902J.
160. ^ Beddar, A.S. (2001). “Mobile linear accelerators for intraoperative radiation therapy”. AORN Journal 74: 700. doi:10.1016/S0001-2092(06)61769-9. http://findarticles.com/p/articles/mi_m0FSL/is_/ai_81161386. Retrieved 2008-10-26.
161. ^ Gazda, M.J.; Coia, L.R. (June 1, 2007). “Principles of Radiation Therapy”. Cancer Network. http://www.cancernetwork.com/cancer-management/chapter02/article/10165/1165822. Retrieved 2008-10-26.
162. ^ Chao, A.W.; Tigner, M. (1999). Handbook of Accelerator Physics and Engineering. World Scientific. pp. 155, 188. ISBN 981-02-3500-3. http://books.google.com/?id=Z3J4SjftF1YC&pg=PA155.
163. ^ Oura, K.; et al. (2003). Surface Science: An Introduction. Springer-Verlag. pp. 1–45. ISBN 3-540-00545-5.
164. ^ Ichimiya, A.; Cohen, P.I. (2004). Reflection High-energy Electron Diffraction. Cambridge University Press. p. 1. ISBN 0-521-45373-9. http://books.google.com/?id=AUVbPerNxTcC&pg=PA1.
165. ^ Heppell, T.A. (1967). “A combined low energy and reflection high energy electron diffraction apparatus”. Journal of Scientific Instruments 44: 686–688. Bibcode 1967JScI…44..686H. doi:10.1088/0950-7671/44/9/311.
166. ^ McMullan, D. (1993). “Scanning Electron Microscopy: 1928–1965”. University of Cambridge. http://www-g.eng.cam.ac.uk/125/achievements/mcmullan/mcm.htm. Retrieved 2009-03-23.
167. ^ Slayter, H.S. (1992). Light and electron microscopy. Cambridge University Press. p. 1. ISBN 0-521-33948-0. http://books.google.com/?id=LlePVS9oq7MC&pg=PA1.
168. ^ Cember, H. (1996). Introduction to Health Physics. McGraw-Hill Professional. pp. 42–43. ISBN 0-07-105461-8. http://books.google.com/?id=obcmBZe9es4C&pg=PA42.
169. ^ Erni, R.; et al. (2009). “Atomic-Resolution Imaging with a Sub-50-pm Electron Probe”. Physical Review Letters 102 (9): 096101. Bibcode 2009PhRvL.102i6101E. doi:10.1103/PhysRevLett.102.096101. PMID 19392535.
170. ^ Bozzola, J.J.; Russell, L.D. (1999). Electron Microscopy: Principles and Techniques for Biologists. Jones & Bartlett Publishers. pp. 12, 197–199. ISBN 0-7637-0192-0. http://books.google.com/?id=RqSMzR-IXk0C&pg=PA12.
173. ^ Freund, H.P.; Antonsen, T. (1996). Principles of Free-Electron Lasers. Springer. pp. 1–30. ISBN 0-412-72540-1. http://books.google.com/?id=73w9tqTgbiIC&pg=PA1.
176. ^ Staff (2008). “The History of the Integrated Circuit”. The Nobel Foundation. http://nobelprize.org/educational_games/physics/integrated_circuit/history/. Retrieved 2008-10-18.
This information originally retrieved from http://en.wikipedia.org/wiki/Electron
on Wednesday 3rd August 2011 1:35 pm EDT
Now edited and maintained by ManufacturingET.org
History of chemistry
The history of chemistry is long and convoluted. It begins with the discovery of fire; then metallurgy, which allowed the purification of metals and the making of alloys; followed by attempts to explain the nature of matter and its transformations through the protoscience of alchemy. Chemistry begins to emerge when the distinction is made between chemistry and alchemy by Robert Boyle in his work The Sceptical Chymist (1661). Chemistry then becomes a full-fledged science when Antoine Lavoisier develops his law of conservation of mass, which demands careful measurements and quantitative observations of chemical phenomena. So, while both alchemy and chemistry are concerned with the nature of matter and its transformations, it is only the chemists who apply the scientific method. The history of chemistry is intertwined with the history of thermodynamics, especially through the work of Willard Gibbs.
The discovery of fire and atomism
The roots of chemistry can be traced to the phenomenon of burning.[citation needed] Fire was a mystical force that was said to transform one substance into another, and was thus an object of wonder and superstition. Fire affected many aspects of early societies, such as their diet, because it allowed them to cook food, and make pottery, specialised tools and utensils.
Atomism can be traced back to ancient Greece and ancient India.[citation needed] Greek atomism dates back to 440 BCE, as indicated by the book De Rerum Natura (On the Nature of Things)[1] written by the Roman Lucretius[2] in 50 BCE. In the book are found ideas traced back to Democritus and Leucippus, who declared that atoms were the most indivisible part of matter. This coincided with a similar declaration by the Indian philosopher Kanada in his Vaisheshika sutras around the same time period.[3] Kanada (also known as Kashyapa) may have arrived at his sutras by meditation. By similar means, he stated a form of Newton's Third Law (action/reaction), and discussed the existence of gases. What Kanada declared by sutra, Democritus declared by philosophical musing. Both suffered from a lack of empirical data. Without scientific proof, the existence of atoms was easy to deny. Aristotle opposed the existence of atoms in 330 BCE, and the atomism of the Vaisheshika school was also opposed for a long time.[citation needed]
In Europe, the Church raised Aristotle's writings almost to the level of scripture, treating atomism as a form of heresy. Aristotle's writings were preserved in Arabic in the Muslim world, and were later translated into Latin by St. Thomas Aquinas and the alchemist Roger Bacon in the 13th century.
The rise of metallurgy
It was fire that led to the discovery of glass and the purification of metals, which in turn gave rise to metallurgy.[citation needed] During the early stages of metallurgy, methods of purification of metals were sought, and gold, known in ancient Egypt as early as 2600 BCE, became a precious metal. The discovery of alloys heralded the Bronze Age. After the Bronze Age, the history of metallurgy was marked by which army had the better weaponry. Countries in Eurasia had their heydays when they made the superior alloys, which, in turn, made better armour and better weapons. This often determined the outcomes of battles.[citation needed]
Indian metallurgy and alchemy
Significant progress in metallurgy and alchemy was made in ancient India. Will Durant wrote in The Story of Civilization I: Our Oriental Heritage:
The philosopher's stone and the rise of alchemy
Main article: Alchemy
Many people were interested in finding a method that could convert cheaper metals into gold. The material that would help them do this was rumored to exist in what was called the philosopher's stone. This led to the protoscience called alchemy. Alchemy was practiced by many cultures throughout history and often contained a mixture of philosophy, mysticism, and protoscience.[citation needed]
Alchemy not only sought to turn base metals into gold; especially in a Europe rocked by bubonic plague, there was hope that alchemy would lead to the development of medicines to improve people's health. The holy grail of this strain of alchemy was in the attempts made at finding the elixir of life, which promised eternal youth. Neither the elixir nor the philosopher's stone were ever found. Also characteristic of alchemists was the belief that there was in the air an "ether" which breathed life into living things.[citation needed] Practitioners of alchemy included Isaac Newton, who remained one throughout his life.
Problems encountered with alchemy
There were several problems with alchemy, as seen from today's standpoint. There was no systematic naming system for new compounds, and the language was esoteric and vague to the point that the terminologies meant different things to different people. In fact, according to The Fontana History of Chemistry (Brock, 1992):
The language of alchemy soon developed an arcane and secretive technical vocabulary designed to conceal information from the uninitiated. To a large degree, this language is incomprehensible to us today, though it is apparent that readers of Geoffrey Chaucer's Canon's Yeoman's Tale or audiences of Ben Jonson's The Alchemist were able to construe it sufficiently to laugh at it.[4]
Chaucer's tale exposed the more fraudulent side of alchemy, especially the manufacture of counterfeit gold from cheap substances. Soon after Chaucer, Dante Alighieri also demonstrated an awareness of this fraudulence, causing him to consign all alchemists to the Inferno in his writings. Soon after, in 1317, the Avignon Pope John XXII ordered all alchemists to leave France for making counterfeit money. A law was passed in England in 1403 which made the "multiplication of metals" punishable by death. Despite these and other apparently extreme measures, alchemy did not die. Royalty and privileged classes still sought to discover the philosopher's stone and the elixir of life for themselves.[5]
There was also no agreed-upon scientific method for making experiments reproducible. Indeed, many alchemists included in their methods irrelevant information such as the timing of the tides or the phases of the moon. The esoteric nature and codified vocabulary of alchemy appeared to be more useful in concealing the fact that they could not be sure of very much at all. As early as the 14th century, cracks seemed to grow in the facade of alchemy, and people became sceptical.[citation needed] Clearly, there needed to be a scientific method in which experiments could be repeated by other people, and results had to be reported in a clear language that laid out both what was known and what was unknown.
Beginnings of chemistry
Early chemists
See also: Alchemy (Islam)
The development of the modern scientific method was slow and arduous, but an early scientific method for chemistry began emerging among early Muslim chemists. One of the most influential among them was the 9th century chemist Geber, whom some consider to be the "father of chemistry".[6][7][8] Other influential Muslim chemists included Al-Razi, Abu-Rayhan Biruni and Al-Kindi. Alexander von Humboldt regarded the Muslim chemists as the founders of chemistry.[9]
Will Durant wrote in The Story of Civilization IV: The Age of Faith:
For the more honest practitioners in Europe, alchemy was an intellectual pursuit, and over time they got better at it. Paracelsus (1493-1541), for example, rejected the four-element theory and, with only a vague understanding of his chemicals and medicines, formed a hybrid of alchemy and science in what was to be called iatrochemistry. Paracelsus was not perfect in making his experiments truly scientific. For example, as an extension of his theory that new compounds could be made by combining mercury with sulfur, he once made what he thought was "oil of sulfur". This was actually dimethyl ether, which had neither mercury nor sulfur.[citation needed]
The first alchemist considered to have applied the modern scientific method to alchemy, and to have separated chemistry further from alchemy, was Robert Boyle (1627–1691).[citation needed] Boyle was an atomist, but favoured the word corpuscle over atom. He commented that the finest division of matter where the properties are retained is at the level of corpuscles.
Boyle is credited with the discovery of Boyle's Law. He is also known for his landmark publication The Sceptical Chymist, in which he attempts to develop an atomic theory of matter, with no small degree of success.
Despite all these advances, the person celebrated as the "father of modern chemistry" is Antoine Lavoisier, who developed his law of conservation of mass in 1789, also called Lavoisier's Law.[citation needed] With this, chemistry acquired a strict quantitative nature, allowing reliable predictions to be made.
Antoine Lavoisier
Although the archives of chemical research draw upon work from ancient Babylonia, Egypt, and especially the Arabs and Persians after Islam, modern chemistry flourished from the time of Antoine Lavoisier, who is regarded as the "father of modern chemistry", particularly for his discovery of the law of conservation of mass, and his refutation of the phlogiston theory of combustion in 1783. (Phlogiston was supposed to be an imponderable substance liberated by flammable materials in burning.) Mikhail Lomonosov independently established a tradition of chemistry in Russia in the 18th century.[citation needed] Lomonosov also rejected the phlogiston theory, and anticipated the kinetic theory of gases.[citation needed] He regarded heat as a form of motion, and stated the idea of conservation of matter.
The vitalism debate and organic chemistry
After the nature of combustion (see oxygen) was settled, another dispute, about vitalism and the essential distinction between organic and inorganic substances, was revolutionized by Friedrich Wöhler's accidental synthesis of urea from inorganic substances in 1828. Never before had an organic compound been synthesized from inorganic material.[citation needed] This opened a new research field in chemistry, and by the end of the 19th century, scientists were able to synthesize hundreds of organic compounds. The most important among them are mauve, magenta, and other synthetic dyes, as well as the widely used drug aspirin. The discovery also contributed greatly to the theory of isomerism.[citation needed]
Disputes about atomism after Lavoisier
Throughout the 19th century, chemistry was divided between those who followed the atomic theory of John Dalton and those who did not, such as Wilhelm Ostwald and Ernst Mach.[11] Although such proponents of the atomic theory as Amedeo Avogadro and Ludwig Boltzmann made great advances in explaining the behavior of gases, this dispute was not finally settled until Jean Perrin's experimental investigation of Einstein's atomic explanation of Brownian motion in the first decade of the 20th century.[11]
Well before the dispute had been settled, many had already applied the concept of atomism to chemistry. A major example was the ion theory of Svante Arrhenius which anticipated ideas about atomic substructure that did not fully develop until the 20th century. Michael Faraday was another early worker, whose major contribution to chemistry was electrochemistry, in which (among other things) a certain quantity of electricity during electrolysis or electrodeposition of metals was shown to be associated with certain quantities of chemical elements, and fixed quantities of the elements therefore with each other, in specific ratios.[citation needed] These findings, like those of Dalton's combining ratios, were early clues to the atomic nature of matter.
The periodic table
Main article: History of the periodic table
For many decades, the list of known chemical elements had been steadily increasing. A great breakthrough in making sense of this long list (as well as, eventually, in understanding the internal structure of atoms as discussed below) was Dmitri Mendeleev and Lothar Meyer's development of the periodic table, and, particularly, Mendeleev's use of it to predict the existence and the properties of germanium, gallium, and scandium, which Mendeleev called ekasilicon, ekaaluminium, and ekaboron respectively. Mendeleev made his prediction in 1870; gallium was discovered in 1875, and was found to have roughly the same properties that Mendeleev predicted for it.[citation needed]
The modern definition of chemistry
Classically, before the 20th century, chemistry was defined as the science of the nature of matter and its transformations. It was therefore clearly distinct from physics, which was not concerned with such dramatic transformations of matter. Moreover, in contrast to physics, chemistry made little use of mathematics, and some chemists were openly opposed to introducing it. For example, Auguste Comte wrote in 1830:
Every attempt to employ mathematical methods in the study of chemical questions must be considered profoundly irrational and contrary to the spirit of chemistry.... if mathematical analysis should ever hold a prominent place in chemistry -- an aberration which is happily almost impossible -- it would occasion a rapid and widespread degeneration of that science.
However, in the second half of the 19th century the situation changed, and August Kekulé wrote in 1867:
I rather expect that we shall someday find a mathematico-mechanical explanation for what we now call atoms which will render an account of their properties.
After Ernest Rutherford's discovery of the atomic nucleus in 1911, Niels Bohr's model of the atom in 1913, and Marie and Pierre Curie's work on radioactivity, scientists had to change their view of the nature of matter drastically. The experience acquired by chemists was no longer pertinent to the study of the whole nature of matter, but only to aspects involving the electron cloud surrounding the atomic nuclei and the motion of the nuclei in the electric field induced by that cloud (see the Born-Oppenheimer approximation). The range of chemistry was thus restricted to the nature of matter around us, in conditions not too far from standard temperature and pressure and in cases where exposure to radiation is not too different from the natural microwave, visible, or UV radiation on Earth. Chemistry was therefore re-defined as the science of matter that deals with the composition, structure, and properties of substances and with the transformations that they undergo.[citation needed] The meaning of matter used here relates explicitly to substances made of atoms and molecules, disregarding the matter within atomic nuclei and its nuclear reactions, or matter within highly ionized plasmas. Nevertheless, the field of chemistry is still, on our human scale, very broad, and the claim that chemistry is everywhere is accurate.
Quantum chemistry
Main article: Quantum chemistry
Some date the birth of quantum chemistry to the discovery of the Schrödinger equation and its application to the hydrogen atom in 1926.[citation needed] However, the 1927 article of Walter Heitler and Fritz London[12] is often recognised as the first milestone in the history of quantum chemistry.[citation needed] It was the first application of quantum mechanics to the diatomic hydrogen molecule, and thus to the phenomenon of the chemical bond. In the following years much progress was made by Edward Teller, Robert S. Mulliken, Max Born, J. Robert Oppenheimer, Linus Pauling, Erich Hückel, Douglas Hartree, and Vladimir Aleksandrovich Fock, to cite a few.[citation needed]
Still, skepticism remained as to the general power of quantum mechanics applied to complex chemical systems.[citation needed] The situation around 1930 is described by Paul Dirac:[13]
"The underlying physical laws necessary for the mathematical theory of a large part of physics and the whole of chemistry are thus completely known, and the difficulty is only that the exact application of these laws leads to equations much too complicated to be soluble. It therefore becomes desirable that approximate practical methods of applying quantum mechanics should be developed, which can lead to an explanation of the main features of complex atomic systems without too much computation. Hence the quantum mechanical methods developed in the 1930s and 1940s are often referred to as theoretical molecular or atomic physics to underline the fact that they were more the application of quantum mechanics to chemistry and spectroscopy than answers to chemically relevant questions."
In the 1940s many physicists turned from molecular or atomic physics to nuclear physics (like J. Robert Oppenheimer and Edward Teller). A milestone in quantum chemistry was the seminal 1951 paper of Clemens C. J. Roothaan on the Roothaan equations.[14] It opened the way to the solution of the self-consistent field equations for small molecules like hydrogen or nitrogen. Those computations were performed with the help of tables of integrals, computed on the most advanced computers of the time.[citation needed]
Molecular biology and biochemistry
By the mid 20th century, in principle, the integration of physics and chemistry was extensive, with chemical properties explained as the result of the electronic structure of the atom; Linus Pauling's book The Nature of the Chemical Bond used the principles of quantum mechanics to deduce bond angles in ever more complicated molecules. However, though some principles deduced from quantum mechanics were able to predict qualitatively some chemical features of biologically relevant molecules, they remained, until the end of the 20th century, more a collection of rules, observations, and recipes than rigorous ab initio quantitative methods.[citation needed]
This heuristic approach triumphed in 1953 when James Watson and Francis Crick deduced the double-helical structure of DNA by constructing models constrained by, and informed by, the knowledge of the chemistry of the constituent parts and the X-ray diffraction patterns obtained by Rosalind Franklin.[15] This discovery led to an explosion of research into the biochemistry of life.
In the same year, the Miller-Urey experiment demonstrated that basic constituents of proteins, simple amino acids, could themselves be built up from simpler molecules in a simulation of primordial processes on Earth. Though many questions remain about the true nature of the origin of life, this was the first attempt by chemists to study hypothetical processes of that kind in the laboratory under controlled conditions.[citation needed]
In 1983 Kary Mullis devised a method for the in-vitro amplification of DNA, known as the polymerase chain reaction (PCR), which revolutionized the chemical processes used to manipulate DNA in the laboratory. PCR could be used to synthesize specific pieces of DNA, and it made possible the sequencing of the DNA of organisms, which culminated in the huge Human Genome Project.[citation needed]
Chemical industry
Main article: Chemical industry
The later part of the nineteenth century saw a huge increase in the exploitation of petroleum extracted from the earth for the production of a host of chemicals, largely replacing the whale oil, coal tar, and naval stores used previously. Large-scale production and refinement of petroleum provided feedstocks for liquid fuels such as gasoline and diesel; for solvents, lubricants, asphalt, and waxes; and for the production of many of the common materials of the modern world, such as synthetic fibers, plastics, paints, detergents, pharmaceuticals, adhesives, and ammonia for fertilizer and other uses. Many of these required new catalysts and the methods of chemical engineering for their cost-effective production.[citation needed]
In the mid-twentieth century, control of the electronic structure of semiconductor materials was made precise by the growth of large ingots of extremely pure single crystals of silicon and germanium. Accurate control of their chemical composition by doping with other elements made possible the production of the solid-state transistor in 1951, and later of tiny integrated circuits for use in electronic devices, especially computers, which revolutionized the world.[citation needed]
References
1. Lucretius (50 BCE). De Rerum Natura (On the Nature of Things). The Internet Classics Archive, Massachusetts Institute of Technology. Retrieved 2007-01-09.
2. Simpson, David (29 June 2005). Lucretius (c. 99 - c. 55 BCE). The Internet History of Philosophy. Retrieved 2007-01-09.
3. Will Durant (1935), Our Oriental Heritage.
4. Brock, William H. (1992). The Fontana History of Chemistry. London: Fontana Press, pp. 32-33.
8. Paul Vallely, How Islamic inventors changed the world. The Independent.
11. Pullman, Bernard (2004). The Atom in the History of Human Thought. USA: Oxford University Press.
12. W. Heitler and F. London, Wechselwirkung neutraler Atome und Homöopolare Bindung nach der Quantenmechanik, Z. Physik, 44, 455 (1927).
13. P.A.M. Dirac, Quantum Mechanics of Many-Electron Systems, Proc. R. Soc. London A, 123, 714 (1929).
14. C.C.J. Roothaan, A Study of Two-Center Integrals Useful in Calculations on Molecular Structure, J. Chem. Phys., 19, 1445 (1951).
15. J. Watson and F. Crick, "Molecular Structure of Nucleic Acids", Nature, 25 April 1953, pp. 737-738.
• Eric R. Scerri, The Periodic Table: Its Story and Its Significance, Oxford University Press, 2006.
This article is licensed under the GNU Free Documentation License. It uses material from the Wikipedia article "History_of_chemistry". A list of authors is available in Wikipedia. |
dcc468bc9263d756 | In our course on physical chemistry, which covers molecular orbital theory (MOT), we have been taught that in the LCAO approach the wave function for a molecule, say the hydrogen molecular ion ($\ce{H2+}$), can be approximated by a linear combination of two atomic orbitals.
From the little I know about the variational method, I gather that this linear combination is then optimised to give the minimum energy, so as to find the ground state of the molecule in question. Is this correct?
Also, if we are simply using the linear combination as a test/guess function for a variational method, why should we restrict ourselves to only two combinations (say $+$ and $-$)? That is, how does the rule "Number of molecular orbitals = Number of atomic orbitals constituting them" arise?
Also, why is there the restriction in form of the rule that "Only atomic orbitals of close energies and proper symmetries combine to give MOs"?
• $\begingroup$ I think the answer is great. The short answer to "proper symmetries" is that orbitals of different symmetry are orthogonal, so they can't combine to give MOs. $\endgroup$ – Geoff Hutchison Nov 10 '14 at 19:43
• $\begingroup$ By saying that we restrict to only $+$ and $-$ combinations, are you considering only the specific case of $H_{2}^{+}$? If so, the reason that we always have a symmetric or antisymmetric combination of the two $s$ orbitals is the mirror symmetry with respect to the plane bisecting the molecule. $\endgroup$ – higgsss Dec 16 '15 at 2:12
You are right to be questioning the validity of this method, and I congratulate you for doing so. This is actually an extremely important skill, one that differentiates the best students from the rest.
The variational method, like many methods from physical chemistry, is a method of approximation (a model) of what really happens. There exists a whole hierarchy of methods for computing the molecular orbitals of molecules (which are themselves models, being the stationary states of the Schrödinger equation), including, at the top end, post-Hartree-Fock theory and density functional theory. These methods provide quantitative information about molecular orbitals, but also require serious computer power (which is generally unavailable to undergraduate students). However, all of the basic physics can be explored and understood with simpler models that can be solved on a few sheets of paper, such as LCAO theory, which is why we teach them to undergraduates.
As regards your question, we do indeed optimise a linear combination. We start off by assuming that the total molecular orbital wavefunction can be approximated as a linear combination of atomic orbitals:
$$|\Psi\rangle = c_1|\phi_1\rangle + c_2|\phi_2\rangle + \cdots + c_n|\phi_n\rangle$$
We then need to find the coefficients. The variational principle (also known as the Rayleigh-Ritz method) states that the coefficients that give the best approximation to the wavefunction will minimize the energy, given by
$$\mathcal{E} = \frac{\langle\Psi|\hat{H}|\Psi\rangle}{\langle\Psi|\Psi\rangle}$$
Now, the second part of your question involves how to compute the two terms in this fraction. Without going into massive mathematical detail, the LCAO method can be recast from an integral problem into a matrix problem, using the atomic orbitals as a basis. In the simplest case (Hückel theory) we assume the atomic orbitals are orthonormal, so that the overlaps $\langle\phi_i|\phi_j\rangle$ are 1 for $i = j$ and 0 otherwise (see later), and the denominator causes no difficulty. The problem is then mostly how to determine the numerator.
In brief, each element of the Hamiltonian matrix is given by
$$H_{ij} = \langle\phi_i|\hat{H}|\phi_j\rangle$$
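Minimising $\mathcal{E}$ with respect to each coefficient $c_i$ then gives the secular equations (a standard derivation, written here with a general overlap matrix $S_{ij} = \langle\phi_i|\phi_j\rangle$ for completeness):

$$\sum_j \left( H_{ij} - E\,S_{ij} \right) c_j = 0$$

These have a non-trivial solution only when $\det(H_{ij} - E\,S_{ij}) = 0$.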
The variational principle applied to this matrix problem implies that the optimized energies are the eigenvalues of the Hamiltonian matrix, and the coefficients are given by the corresponding eigenvectors. Since an $n \times n$ matrix has precisely $n$ eigenvalues, this implies the "number of molecular orbitals = number of atomic orbitals" rule. Notice that the restriction to the $+$ and $-$ combinations is by our own design (the LCAO ansatz). We could have picked more complex trial functions, but this would make computation far more difficult.
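To make this concrete, here is a minimal numerical sketch of the two-orbital case (my own illustration, not part of the original answer; the values of $\alpha$, $\beta$ and $S$ are placeholders, not fitted numbers). It solves the generalized eigenvalue problem $\mathbf{H}\mathbf{c} = E\,\mathbf{S}\mathbf{c}$ arising from the secular equations above:

```python
# A minimal sketch of the linear variational (LCAO) problem for two
# identical atomic orbitals, as in H2+. The parameter values alpha,
# beta and S are illustrative placeholders only.
import numpy as np
from scipy.linalg import eigh

alpha = -13.6  # on-site integral <phi_i|H|phi_i>, in eV (placeholder)
beta = -4.0    # coupling integral <phi_1|H|phi_2>, in eV (placeholder)
S = 0.25       # overlap integral <phi_1|phi_2> (placeholder)

H = np.array([[alpha, beta],
              [beta,  alpha]])   # Hamiltonian matrix H_ij
Smat = np.array([[1.0, S],
                 [S,   1.0]])    # overlap matrix S_ij

# Solve the generalized eigenvalue problem sum_j (H_ij - E S_ij) c_j = 0.
energies, coeffs = eigh(H, Smat)  # eigenvalues returned in ascending order

for E, c in zip(energies, coeffs.T):
    print(f"E = {E:7.2f} eV   coefficients = {c}")
# The lower root is the symmetric (+) bonding combination,
# the upper root the antisymmetric (-) antibonding one.
```

Two atomic orbitals go in and exactly two molecular orbitals come out; with $n$ basis orbitals the same code returns $n$ molecular orbitals.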
It most certainly is true that in order for there to be a significant interaction, two orbitals must be close in energy. The detailed reasons are complex, but essentially it comes down to the size of the coupling $\langle\phi_i|\hat{H}|\phi_j\rangle$ relative to the energy separation between the orbitals: when the orbitals are far apart in energy, the mixing is small and they interact only weakly. This doesn't necessarily mean they can't form molecular orbitals, but the effects are negligible and not important in understanding the chemistry, so they are neglected in simple models (although they are often included in some of the most complex modern methods).
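One standard way to quantify this (a textbook perturbation estimate, not taken from the answer itself): to second order in perturbation theory, the energy shift of $\phi_i$ from mixing with $\phi_j$ is approximately

$$\Delta E_i \approx \frac{\left|\langle\phi_i|\hat{H}|\phi_j\rangle\right|^2}{E_i - E_j}$$

so even a sizeable coupling is suppressed by a large energy gap $E_i - E_j$ in the denominator.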
The reason that only atomic orbitals of the same symmetry combine to give molecular orbitals is the overlap integral $\langle\phi_i|\phi_j\rangle$: the integral over all space of the product of the two AOs. This is zero for orbitals of different symmetry, and it must be non-zero for an MO to form. Consider a $\mathrm{2p_z}$ orbital and a $\mathrm{1s}$ orbital, for instance (for illustration purposes only!). The s orbital is spherically symmetric, with the same sign at every point; the $\mathrm{p_z}$ orbital has a "dumbbell" shape, with equal regions of opposite sign above and below the $xy$-plane. The positive and negative contributions to the product therefore cancel exactly, and the total integral is zero. No overlap = no interaction.
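A quick numerical check of that cancellation (again my own sketch, not from the answer; Gaussian radial parts are chosen purely for convenience, and any spherically symmetric radial factor cancels the same way):

```python
# Numerically verify that the overlap of an s-type and a p_z-type
# function centred on the same point vanishes by symmetry.
import numpy as np

pts = np.linspace(-8.0, 8.0, 121)       # symmetric Cartesian grid
dx = pts[1] - pts[0]
x, y, z = np.meshgrid(pts, pts, pts, indexing="ij")
r2 = x**2 + y**2 + z**2

s = np.exp(-r2)        # s-like: spherically symmetric, one sign everywhere
pz = z * np.exp(-r2)   # p_z-like: odd in z, lobes of opposite sign

overlap = np.sum(s * pz) * dx**3        # crude Riemann-sum quadrature
print(f"<s|p_z> ~ {overlap:.1e}")       # ~ 0: the signed lobes cancel
```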
• $\begingroup$ Thank you very much for your answer! What I don't get is how the variational principle produces the energies of the higher states. After you answered, I searched for the variational principle and the Hamiltonian matrix and came across the linear variation method, which minimises the energy with respect to each coefficient; from the condition for non-trivial solutions we get the values of the energy. How do we know that these correspond to the higher states of the molecule? $\endgroup$ – transistor Nov 11 '14 at 3:57
• $\begingroup$ Whilst it is true that the variational principle is only rigorously true for the ground state, one can in principle use this to extract higher states based upon the orthogonality of the wavefunctions: a consequence of the Hermiticity of the Hamiltonian operator. This is possible if you know a quantum number (such as energy) that differentiates the states. For some trial wavefunctions, this involves complex projection operations, however due to the way the LCAO can be formulated using Hermitian matrices, we can get a bunch of orthogonal higher states "for free" $\endgroup$ – DrHarps Nov 11 '14 at 17:13
• $\begingroup$ @DrHarps the variational principle is perfectly valid for excited states. The issue is that there is a difference between molecular orbitals $\psi$ and the state that they describe $\Psi$. This discussion has focused on the solution of one $\Psi$ (the ground state), but it is possible to solve for higher states $\Psi$ using techniques such as multiconfigurational HF theory. A somewhat crude description is that an excited state is the same MO set as the ground state, but with different occupations, orbital coefficients, and CI coefficients, cf. the State-Average CASSCF method. $\endgroup$ – Eric Brown Nov 13 '14 at 4:21
|